Is the ASML machine actually the world's most complex machine under some metric, or is this a claim that someone made up? I.e., did someone actually compare the ASML machine to the Space Shuttle, LHC, Internet, and so forth and show that it is more complex under some definition? (I've done various historical questions, so I'm sensitive to how statements are sourced.)
An orthogonal question is what makes sense as a measure of complexity. One could use "number of parts" (whatever that means): NASA says the Space Shuttle has 2.5 million moving parts, while the article says the ASML machine has over 100,000 components. Another issue is how to deal with composition. A TSMC fab is obviously more complex than a lithography machine since it contains a lithography machine, but maybe the fab doesn't count as a "machine". Another issue is complexity vs parts: a 32-Gb DRAM chip has about 68 billion transistors and capacitors, but it's not extremely complex, since it's mostly the same thing repeated. And then there's the question of distribution: can you really count the Internet as one "thing"?
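For what it's worth, the DRAM figure above is easy to sanity-check, assuming the standard 1T1C cell (one transistor plus one capacitor per bit) and ignoring overhead like sense amps and spare rows:

```python
# Sanity check on the DRAM figure: assumes a standard 1T1C cell
# (one transistor + one capacitor per bit), ignoring sense amps and spares.
bits = 32 * 2**30          # 32 Gb
devices = bits * 2         # transistor + capacitor per cell
print(f"{devices / 1e9:.1f} billion devices")  # ~68.7 billion
```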
In either case, the secret design has the same effect, but sub secrets are the top of the top of top secret. Spies who leak sub secrets spend a long time in Leavenworth.
Wrt the space shuttle, I would take some issue because you could say it's not just one machine, but a collection of many, for example it probably has onboard computer systems that are not always in use. It would be a bit like saying that a whole factory is "a machine". Whereas the ASML devices serve one single clear purpose.
I've been reading Dashiell Hammett detective stories from the early 1920s and it seems like cars were almost exclusively referred to colloquially as 'machines' back then.
I mean, your brain has an order of magnitude more neurons than there are people on the planet. I think humans are just incapable of wrapping our heads around the sheer number of tiny things that fit in small macroscopic spaces.
Hey, author of the piece here. Glad people seem to have enjoyed it. If you're looking for more sources on ASML/EUV I put together a bibliography of things I looked at while writing the piece https://neilhacker.com/2026/04/28/asml-article-bibliography/
Great piece, I definitely enjoyed reading through it.
I don't have a question about ASML or the machine in particular, but I am curious about your thoughts on something: I've recently noticed a fair bit of media (blogs, YouTube videos, TikTok clips) about the same thing: this machine and the EUV process. Do you think interest in this topic is just a coincidence or did something happen to cause these different content creators and authors to do a piece on it at around the same time? What caused you to do a piece on this now?
There's a thing in writing where you can make bold claims in order to give the reader an idea about what the rest of the article is going to be about - that's what's happening here, a bit of editorializing .. but do you know of a more complex machine than the ASML/TSMC production line, in terms of inputs/outputs?
I think, if one were used to calculating cyclomatic complexity, such a headline is not only amusing, but also fascinating even if it is 'wrong' by .. some value system .. because the thought exercise to come up with a more cyclomatically complex machine, is rather a fruitful challenge. And that is why writers should be allowed to editorialize, because .. after all .. this is a thought-provoking article, isn't it ..
For this thought experiment I would welcome you to contribute another measure besides cyclomatic complexity as a means of ascertaining the truth of the matter, because after all complexity is multi-dimensional, but on the basis of number of actual things that have to be qualitatively measured in order for the machine to function as intended, I can think of a few other big machines that would be in scope, but - as a person who does complex systems work professionally - I'm pretty sure that the editorializing was a way to kick off some neurons in the intended audience, and not much more than that.
However, let us continue to postulate there are other forms of complexity that can be measured - what would you suggest are the other 3 or 4 contenders for the title?
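For anyone unfamiliar with the metric being borrowed here: McCabe's cyclomatic complexity is computed from a control-flow graph as M = E − N + 2P (edges, nodes, connected components). A minimal sketch, with a made-up toy graph purely for illustration:

```python
# McCabe's cyclomatic complexity M = E - N + 2P for a control-flow graph.
# The graph below is a made-up example, not any real program.
def cyclomatic_complexity(edges, nodes, components=1):
    return len(edges) - nodes + 2 * components

# Toy control-flow graph: 7 nodes, 8 edges, one connected component.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (5, 2), (6, 7)]
print(cyclomatic_complexity(edges, nodes=7))   # 8 - 7 + 2 = 3
```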
>ASML started off life within Philips, the Dutch consumer electronics giant.
Philips started with light bulbs, which used electrons for direct visual and UI/UX purposes. They're some of the simplest electronic components, but quite a bit like appliances in themselves. No surprise that "lamp" in English means either a bulb, an appliance, or both.
Vacuum tubes were the next step up in complexity and I guess you can take it from there.
In the early radio days it didn't take too many "ampules" to make a radio. Not nearly as complex as a cellphone, but already bizarrely more complicated than a light bulb.
The Edison Effect turned out to be a very strong force after all :)
At one time every building that had electronics, had vacuum tubes. When you moved a radio or TV set, you were carrying your own little vacuum chambers with you from place to place, even as late as CRT's.
With solid-state electronics like this, the vacuum chambers are much bigger, but are only located in a centralized factory process, so you don't have to carry them around with you if you want to be portable.
You wouldn't want to anyway, look how heavy they have gotten ;)
It's been mentioned before, but Chris Miller's Chip War from a few years back is an excellent, very-readable book on the topic. Goes into depth on the history and development of chips and their production. He did the rounds on the interviews back then, and it's definitely worth a read. The EUV stuff is great, but I particularly liked his history on how the USSR was always going to lose and how integral Apollo really was.
Chip War is an interesting book, but it has a lot of errors, which made it frustrating to read. For instance, it said that vacuum tube computers were hardly usable because they attracted moths, so they required constant debugging and were only used in specialized applications like cryptography. This is wrong in many ways. Most notably, Grace Hopper's story of the moth was in the Harvard Mark II computer, a relay computer. (I saw the actual moth yesterday, by the way.) Vacuum tube computers were highly popular, selling in the thousands for numerous general-purpose applications. And contrary to Chip War, ENIAC was not used to compute artillery tables during WWII, because it wasn't running until the war ended. This was just a half-page section in the book, but it messed up a lot of things.
Don't have it in front of me so I can't comment on the actual language, but wasn't ENIAC designed for the purpose of firing tables, and by the time it was "completed" it got used for e.g. the H-bomb?
Yes. The statement in Chip War, like many others, is close to correct but still wrong. When a book gets many things wrong in an area that I know, it makes me worry about its accuracy in areas that I don't know. (See Gell-Mann Amnesia.)
Yeah… when I’m eating breakfast, a lecture is not what I’m after. I watched that Veritasium video a while back and was glued to it. Any other presentation style and I probably would have completely skipped it (thinking I’ll watch it another time knowing I would never go back to it).
I thought the power grid is the most complex machine. The power grid is a gigantic machine spanning a country or significant parts of a country. It includes all the power production plants, millions of miles of transmission and distribution lines, substations for transmission and distribution, and billions of devices consuming power for residential and industrial use. The grid ensures these billions of devices are operating at 60 Hz frequency—all the time. The grid's primary function is to maintain this frequency, no matter what.
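To illustrate the frequency-regulation point, here is a toy model of how grid frequency sags after a generation/load imbalance, based on a simplified swing equation. All constants (inertia H, damping D, the 2% imbalance) are assumed round numbers for illustration, not real grid data:

```python
# Toy swing-equation model of grid frequency response (illustrative only;
# H, D, and the power step are made-up round numbers, not real grid data).
f0 = 60.0      # nominal frequency, Hz
H = 5.0        # inertia constant, seconds (assumed)
D = 1.0        # load damping, per-unit power per per-unit frequency (assumed)
dt = 0.01      # integration time step, s

def simulate(p_imbalance_pu, t_end=10.0):
    """Return the frequency trace after a sudden per-unit power imbalance."""
    df = 0.0               # frequency deviation, per unit
    trace = []
    t = 0.0
    while t < t_end:
        # simplified swing equation: 2H * d(df)/dt = p_imbalance - D * df
        ddf = (p_imbalance_pu - D * df) / (2 * H)
        df += ddf * dt
        trace.append(f0 * (1 + df))
        t += dt
    return trace

trace = simulate(-0.02)    # suddenly lose 2% of generation
print(min(trace))          # frequency sags below 60 Hz until damping balances
```

The steady-state sag here is set by the damping term; real grids arrest it much earlier with governor droop and reserves, which this sketch deliberately omits.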
ASML are not the chokepoint for chips. Zeiss are. ASML can hire more engineers and build more machines. Zeiss cannot hire more mirror grinders. And no one wants to train as one.
> By betting on extreme ultraviolet lithography long before it worked, ASML became the chokepoint for cutting-edge chips.
Makes one wonder: would we be much better or worse off if we reshaped society to do more things where a new technology is unlikely to work but highly beneficial in the limit? Would we sooner have 10 additional ASMLs, or waste a lot of resources?
Big gambles have big risks. It's the gambler's folly, after a big win (ASML's EUV) to say, gee, we should have bought more lottery tickets! Next time, it all goes on red!
What goes unmentioned is that ASML made another big gamble at the time they started on EUV. They decided to make an all-in-one chip-making machine that took in silicon and output finished chips (instead of matrices of chip circuits laid out on a wafer).
On paper, the machine would save a lot of money for the fab houses. IRL: no one asked for it, and no one was willing to risk their entire production on a single, untried, swiss army knife of a fabricator.
The whole program was a wash. People were reassigned and the project died a very quick death. ASML lost a ton of money on this misguided attempt, but not enough to choke them.
So, they rolled the dice twice, and one gamble paid off handsomely. If it went the other way, they'd be a smaller company, and Moore's Law would be overshooting reality. If neither paid off, they'd be DOA.
The timelines matter as well: They were working on EUV at Zeiss (who make the lensing/mirroring systems) already in 2005. That's about 20 years of development.
We could have more ASML's immediately if we eradicate the desire to covet technology for one in-group, over another.
> reshaped society
Invalidate all of ASML's patents = get cheaper chips, sooner.
It is intellectual property which gives some of us the ability to build these things and sell them to others - get rid of this phony concept and we can have more nice things...
The article might disagree. See the subsection, "The importance of tacit knowledge". OTOH, if that tacit knowledge is indeed so critical then there's less risk (e.g. regarding future investment incentives) to narrowing patent protections. OTOOH, ASML's supply chain is deep and complex, and the patent portfolio is presumably similarly diffuse, which makes it difficult to analyze or even, short of a complete patent regime overhaul, identify which patents to open up to accelerate adoption.
ASML's supply chain is deep and complex - and secret. But if it were F/OSS (just imagine it) from sand to chip, that complexity would have a wider scope of human attention applied to it.
What is happening with ASML now, once happened with the wheel.
Patents are supposed to be the antidote to industrial secrets. Of course, it doesn't really work out that way because in addition to patent writers hiding the ball or strategically layering patents and secrecy, things like tacit knowledge and organization play a huge role in exploring, building, and applying solutions. FOSS doesn't really help with the tacit stuff. It's partly why it's so difficult for projects to survive after the original authors move on. With software that's not necessarily immediately fatal as long as the software works well and is easy enough to tweak around the edges to keep it compiling and interfacing well, qualities which FOSS is meant to foster and preserve. But outside software, and especially in the industrial sphere, the loss of that tacit knowledge and organization is often immediately fatal. You can't just copy stuff, you have to rebuild all that tacit knowledge and process. Often times, like in software, the resulting product that nominally achieves the same results is built around an entirely different technical approach.
Or you could have nobody bother to invest in things like this because of no reward, or they become closely guarded trade secrets of which the Elves keep and nobody else is even allowed to know they exist.
"no reward" is weak, because of course you wouldn't make a wheel, say, unless you intended to roll somewhere.
You're basically saying "ASML's entire production line is worthless unless it is rare and coveted", which is .. obviously not true .. because of course the output is immensely useful.
The world needs more chipfabs, not fewer. A properly scaled chipfab in places like Broome or Santiago, or .. indeed in orbit .. would go a long way to sorting out the world's fires.
The thing stopping us, is the international, imperial system of patents and intellectual 'property', which make nation states subservient to each other on the basis of ideas.
The ideas could be spreading far and wide, but we humans are keeping them in our cage, in which the only reward is having other cages to extract wealth from ..
If everyone could make these machines, there'd be more of these machines.
There are so many examples of this out there, already, that I find this specious "no next generation" argument to be either simply coming from bias, or ignorance.
For sure, we only care about Taiwan because there is one Taiwan. End patents: no more Taiwan problem.
> These machines are roughly the size of double-decker buses. To ship one requires 40 freight containers, three cargo planes, and 20 trucks. They are the world’s most complex objects. Each contains over one hundred thousand components, all of which have to be perfectly calibrated for the machine to produce light consistently at the right wavelength.
As a software engineer by trade, the passage above communicates a few very important things to me, and little else by comparison: the machines are ultimately fragile and nowhere near "optimised", since the complexity is, by the author's own admission, substantial to put it mildly; the machine is not exactly a commodity, since one of its hundred-thousand-plus pieces breaking subtly likely renders it inoperable; and its cost is proportional to its complexity (read: astronomical). The mere fact that it's a focal point of geopolitics only supports the rest of the argument: it's a machine of the current stone age, much like siege engines were at one point the closely guarded, win-or-lose multipliers of feudal culture.
I mean it's certainly interesting to read about the complexity, but reducing the complexity and commoditising the whole thing is what's really going to be impressive I think :-)
I am probably speaking out against the nerd in us, and none of what I said should detract from enjoying the article or the subject. It's just that I think the complexity here is the giveaway that we haven't quite conquered EUV lithography yet :-) Or maybe we lack the right materials that would allow us to shrink the machine or make it less complex and less prone to calibration errors.
Complexity doesn't necessarily mean it's suboptimal. Lithography and nanofab are usually doing a whole range of disparate and wildly exotic processes with extreme vacuum, plasmas, electron guns; any number of crazy and dangerous process gases like H2, HF, or silane; and occasionally raw materials like iridium and rhodium. And that's all without the actual lithography. When your margin for error is measured in single atoms and your number of features per die outnumbers the planet's population 2:1, physical laws start to stand in the way of simplification.
The one 'machine' encompasses more disciplines than most universities offer. It's really a whole bleeding edge factory compressed into a room.
Indeed, all this reminds me of the marvel that is mechanical timekeeping - incredibly complex engineering that would ultimately be surpassed by dirt cheap electronics.
What is the corresponding revolution in chip production? I imagine something like FPGAs for lithography - a wafer that can somehow work on another wafer in a sandwich-like configuration. Such a process could potentially improve on each iteration and thus get very good, very fast.
I usually hear it as "one of the most expensive single projects" rather than the most complex. I'd bet there are many more complex; the LHC comes to mind.
In terms of precision engineering, the LHC is for sure more complex. But then again, the ISS is made out of many life-critical systems. In safety-critical systems, the actual hardware may look simpler in some ways than the bleeding-edge stuff in that category of product, but then the complexity is in turning it into something that can be safety-certified. Just different math entirely.
I.e., there are lots of fun applications for radar, and some of them involve very complex math in the manufacturing process. Then you've got automotive radar: you mainly need the position and velocity of some objects, so the math is simpler. But you have to certify that stuff for ASIL-D, and no one makes ASIL-D radars, so you combine multiple radars — three B's make a D, as the saying goes. Then you've got to worry about BOM costs, because you want to ship 10 million cars..
It is unavoidable that, at some point, China will have its own matching or better machine, because they obviously know how incredibly strategically important it is.
Non-zero chance - yes. Unavoidable - I wouldn't be so sure. I can't imagine how many top human-hours and cutting-edge inventions are involved in constructing this machine. And much of this simply cannot be stolen or bought, no matter how much money you have.
One can ballpark it: during EUV commercialization, ASML had ~15k employees, Zeiss ~3k, Cymer ~1k. Twenty years of non-priority commercial development, with lots of setbacks, and ~5k suppliers for final integration. For reference, commercial aviation's Boeing/Airbus each have ~100k employees and ~50k suppliers. And we don't even know EUV is the correct technical roadmap. Initially they thought a synchrotron was better than plasma/LPP but went with the latter because a synchrotron was too expensive; now EUV machine prices have ballooned to multiples of a synchrotron's price. Don't be surprised if it turns out to be a dead-end, non-competitive tech in 5-10 years: if the PRC or Japan figures out SSMB/FEL etc., LPP may become economically uncompetitive and all of ASML's EUV becomes stranded assets. This is a real possibility because, while ASML's LPP works, it works at far higher cost than originally projected, i.e. it's an over-budget tech stack with lethal scaling costs.
On paper, EUV is a relatively modest undertaking vs commercial aviation — EUV has deeper integration, commercial aviation has breadth — but in terms of the scale of effort for nation-state coordination, EUV is probably, all things considered, easier to replicate, because it has no regulatory slowdown; it's purely a host-country physics problem. Having enough talent and throwing it at the problem, times espionage, times poaching, times time, will likely crack a precision-physics problem sooner rather than later. Commercial aviation, by contrast, has complicated geopolitical/regulatory hurdles and an order of magnitude more suppliers and scale. TL;DR: EUV has a smaller organizational surface area for a determined state to pursue by concentrating money, talent, and effort. You can hire an ex-ASML engineer to bootstrap EUV development; it's much harder to get the globe to buy COMAC without decades of airworthiness record. There's a reason Western analysts predict PRC EUV in the 2030s (meanwhile the PRC has already beaten prototype estimate timelines) but consider global COMAC unrealistic in the same timeframe — and the PRC has been hammering at commercial aviation seriously since long before EUV.
That's the key - if it was done once, it can be done again, and likely it's going to be significantly cheaper/easier because it's known it can be done. We see this from olympic records (e.g., the 4 minute mile was a "barrier" until one day it was passed and suddenly a bunch of people passed it).
Of course, doing it "legally" is another question - someone in the US trying to replicate would likely run into patent and other issues.
But a top-secret Manhattan-style project done by the US or China? definitely doable, and if you add spy-shit in, perhaps even faster.
It has never happened in the history of the world that a company or country could maintain its technological advance indefinitely.
Either China will catch up on this or that particular technology will become obsolete. But it is certain that they won't stay behind forever (measured in a small number of decades at most).
There is no doubt that less than 10 years will be needed for China to be able to do something equivalent to what the ASML machines can do now.
What is far less certain is what ASML will be able to do at that time, i.e. if they will be able to progress significantly over the state-of-the-art of today, or they will reach a plateau.
Besides China, there is a renewed effort in Japan to become competitive again, so ASML may face in the future both Chinese and Japanese competitors.
I mean, you've definitely had technology disappear, though, usually because of war. Damascus steel was a lost military tech. We could certainly end up accidentally (or worse, intentionally) bombing this stuff out of existence so nobody has it.
This is kind of like saying you can prove everyone dies based on the evidence that everyone who is not currently alive has died.
You might place an upper limit using history but in this case I'd guess that limit would end up being much larger than the present semiconductor industry itself might last.
I'd say it is more likely that within 20 years the domestic Chinese semiconductor industry will be state-of-the-art across the full vertical and horizontal range.
There is a level of arrogance in the West that says China only does cheap but simple/low-quality work, whereas this is only a stepping stone along the way. German car manufacturers went into China during the 90s with that mindset, expecting it to last forever; well, they don't think that anymore...
“Retaining the best workers is especially crucial in an area like photolithography, where a huge amount of tacit knowledge is used to assemble its machines. An ASML engineer once told He Rongming, the founder of Shanghai Micro Electronics Equipment, one of China’s top ASML competitors, that the company wouldn’t be able to replicate ASML’s products even if it had the blueprints. He suggested that ASML’s products reflected ‘decades, if not centuries’ of knowledge and experience. ASML’s Chinese competitors have systematically attempted to hire former ASML engineers, and there is at least one documented case of a former ASML employee unlawfully handing over proprietary information. But none of this appears to have narrowed the gap.”
I find it hard to believe that there is no equivalent anywhere else in the world. There is so much talent out there and the stakes are so high that it seems like an inevitability.
However many secrets are involved, information wants to be free, and it's hard to believe that others won't figure it out.
By the time they do catch up, we'd better be steps ahead. What's after EUV?
- ASML's High-NA EUV machines ready for high-volume production
- Machines have processed 500,000 wafers, showing technical readiness
- Full integration into manufacturing expected in 2-3 years, ASML's CTO says
After that, it may be X-rays.
A disruptive step would be to move to 3D printing, but that (among other issues) is too slow at the moment. Maybe, ideas from nano robotics (https://en.wikipedia.org/wiki/Nanorobotics) can help there.
> A disruptive step would be to move to 3D printing
The lithography equivalents of that are laser direct write lithography and e-beam lithography. They've been used for decades in research labs, but they're impossibly slow for any mass production.
Atomic Semi are trying to make some derivative of these processes happen at a commercial scale.
> I find it hard to believe that there is no equivalent anywhere else in the world. There is so much talent out there and the stakes are so high that it seems like an inevitability.
Well, even jet engine manufacturing is something China is (relatively speaking) behind in, and it seems to be simpler than some of the stuff in EUV machines.
Honestly I thought the same, but after watching a couple of videos on how EUV actually works, and on what ASML (and the 1,200 other specialized companies that feed into its supply chain) have built...
I can understand why you can't just take one apart and copy it.
There's (apparently) 4 decades of accumulated cutting edge scientific research that has gone into these machines.
I suspect the machinery, process and human expertise required to simply produce the parts required for these machines is the real moat (oh and I guess the US-led export controls too).
The build tolerances for components are incredible. There are 11 primary mirrors in an EUV machine; each one has something like 100 alternating coats of ultra-pure materials, deposited in nanometer-thick layers with thickness control at the picometer level, across a 1-meter-wide curved surface.
Then you have to position the mirrors perfectly inside the machine, again with tolerances in the nanometers.
So even if you know what you need to do, having the equipment and expertise to do it is a different thing.
And that's just one part of the 100,000+ parts that make up an EUV machine.
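To put rough numbers on the mirror coatings: the multilayers act as Bragg reflectors, so the bilayer period must be about half the 13.5 nm wavelength at near-normal incidence. A back-of-envelope sketch (idealized; it ignores refractive-index corrections, which shift the real Mo/Si period slightly, and the ~50-bilayer count is a typical published figure, not a measured spec):

```python
# Rough Bragg-mirror arithmetic for EUV multilayer coatings (idealized:
# first-order Bragg condition at near-normal incidence, no index corrections).
wavelength_nm = 13.5
period_nm = wavelength_nm / 2          # bilayer period ~ lambda / 2
layers = 2 * 50                        # ~50 Mo/Si bilayer pairs (typical figure)
print(f"bilayer period ~{period_nm} nm, stack of ~{layers} layers")

# A 1% thickness error in a single layer is already tens of picometers:
single_layer_nm = period_nm / 2
print(f"1% of a single layer: {single_layer_nm * 0.01 * 1000:.0f} pm")
```

That last line is the whole point: even percent-level deposition control on one layer already means picometer-scale precision, repeated a hundred times per mirror.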
I worked on part of it in 2006-8. I noticed that our office waste wasn't being shredded, and asked my boss why not...
"With all the problems we have getting this to work? We ought to ship our drawings to our competitors to slow them down!"
Very tongue-in-cheek, but... yeah. The entire machine underwent a massive overhaul when it was discovered that bare, unoxidized titanium in the presence of elemental hydrogen would absorb so much of it that it became brittle. Who knew? Maybe a few chemists, but none worked in ASML design, as it happened.
The strategic importance is vastly overhyped - maybe by people who want to sell chips. The actual physical feature-size shrinkage rate has slowed dramatically compared with a decade ago. Making more efficient algorithms or architectures will beat out trying to fight physics.
Every part of this technology is astounding and you need a reasonable basis in physics to truly appreciate just how astounding it is. And the tolerances are so ridiculously precise, it boggles the mind.
Even before you get to the lithography machine you need silicon. For a long time we've known how this is done. You create what's called a boule, which is where you create a cylinder of almost pure silicon by seeding molten silicon with a crystal and slowly forming it. You then cut the boule into the silicon discs we often see. That machining and polishing itself has to be super-precise.
But I can remember when the tolerance for impurities was 1 part per 300 million. I read recently that even 1 part per billion is now too impure. And that makes sense. The biggest chips have what, 80 billion transistors? I seem to remember Nvidia makes chips in that range (or rather TSMC does, for Nvidia). At 1 ppb, that might make ruining your chip just too likely.
So my point is that there's a whole industry to make super-pure silicon which itself took amazing advancements and without that this machine would be a lot less useful.
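Some quick order-of-magnitude arithmetic on why ppb-level purity matters; the active-layer dimensions below are assumed purely for illustration:

```python
# Order-of-magnitude arithmetic only: why 1 ppb impurity is still a lot.
atoms_per_cm3 = 5.0e22                 # silicon atom density, ~5e22 per cm^3
ppb = 1e-9
impurities_per_cm3 = atoms_per_cm3 * ppb
print(f"{impurities_per_cm3:.0e} impurity atoms per cm^3 at 1 ppb")

# Assume a large die of ~800 mm^2 with ~1 um of active silicon (illustrative):
active_volume_cm3 = 8.0 * 1e-4         # 8 cm^2 area * 1e-4 cm depth
in_active_layer = impurities_per_cm3 * active_volume_cm3
print(f"~{in_active_layer:.0e} impurity atoms in the active layer")
```

Even at 1 ppb, the impurity count in the active layer comes out in the tens of billions, i.e. on the same order as the transistor count, which is the commenter's point.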
Another part that amazes me is just how pervasive multiple layers on chips have become. I can remember when that was novel. The upper layers are made by cheaper machines with EUV reserved for a transistor "base layer" where all the interconnects really are.
It's amazing just how many problems had to be solved to make this possible.
My understanding is that the marketing node name has diverged from physical feature size for the past decade or so — e.g., a "3 nm" node (marketing) may actually be ~42 nm physical. So, in other words, progress has slowed (diminishing returns to inverse scaling).
(The video describes the actual lithographic laser.)
And it's a source of serious hazardous waste products. It's a laser-driven tin-plasma source, operating in an ultra-pure vacuum, on an unbelievably high-energy band (even laser "lines" have definable bandwidths). There's really not a lot of wiggle room in materials selection for the laser.
If there's really such a bottleneck around ASML, why not design some extra chips for legacy processes that presumably already have well known design workflows?
I mean we're not talking AMD FX and Core 2 Duo here, it's Raptor Lake and Zen 3, it's perfectly viable and still being sold in droves right now.
That’s what the likes of AMD with their chiplet design have been doing.
There's also the issue of older process nodes not being profitable enough anymore, which explains why, at the height of the chip supply crunch, older ARM chips were in short supply but there was ample stock of the 40nm-class RP2040.
This is gonna sound super dumb, but I'm not sure how they aren't being profitable if there are shortages, just price things beyond break even level? The average person can't even tell the difference between a Core 5 and a Core 5 Ultra, you can practically sell them at the same price and I'm not even sure they'd notice when actually using them. The performance jump is relatively minor and the bottlenecks are elsewhere.
Part of those prices isn't something the manufacturer can adjust. Whether you're building 60nm or 20nm chips, you need pretty much the same silicon wafers, the same ultra-pure water, the same chemicals, and the same personnel. And as a bonus, on an older node you're not going to get as many copies of the same chip out of each wafer.
And sure, a chip layout can be shrunk; but that requires a whole new recertification cycle.
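The dies-per-wafer point can be sketched with the classic approximations; all the numbers here (die areas, defect density) are illustrative, not real process data:

```python
# Back-of-envelope dies-per-wafer and yield sketch (all numbers illustrative).
import math

def dies_per_wafer(wafer_diam_mm, die_area_mm2):
    """Classic approximation: gross dies on a round wafer minus edge loss."""
    d = wafer_diam_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(defects_per_cm2, die_area_mm2):
    """Simple Poisson yield model: Y = exp(-D0 * A)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

# Same logic die: small on a new node, larger on an old node (areas assumed).
for node, area in [("new node", 50.0), ("old node", 150.0)]:
    n = dies_per_wafer(300, area)
    y = poisson_yield(0.1, area)
    print(f"{node}: {n} dies, {y:.0%} yield, {int(n * y)} good dies per wafer")
```

Running it shows the double penalty of the older node: fewer gross dies per wafer and a lower yield per die, against the same fixed wafer cost.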
It mostly comes down to the consumer market not being significant enough by itself. A consumer may not notice a 10% increase in performance per watt or dollar. A large office building probably will, and a datacenter definitely will.
I don't think I'm being entirely hyperbolic when I say the consumer market only exists to put devices that can connect to and feed the datacenter loads into the general population's hands.
Isn't this exactly what China is doing? Apart from poaching ex-ASML employees? Now reaching 7nm, and just throwing more energy at it to catch up in FLOPS, like Jensen said?
Because a very large share of the market now is datacenters. The difference from desktop is dramatic: for desktop, fairly simple chips with poor energy efficiency are acceptable, but DCs already deal with extremely high power consumption; they typically "compress" so much consumption into one rack that they're constantly working near physical constraints.
That's the AI hype narrative, but aren't server CPUs only like 25% of the total market? That's tiny compared to consumer volume, though revenue is likely on par given the higher cost per unit.
> aren't server CPUs only like 25% of the total market?
Yes and no. If you just calculate it formally, then yes, servers are a small market by volume. But they are much less financially constrained than private persons, so from the same fab one could earn much more money selling to the server market than to the consumer market.
I don't think that's correct, server chips aren't really "more expensive" than consumer chips when you correctly account for performance. Older-gen server chips have comparable performance to new top-of-the-line consumer chips and sell for a similar price. Newer-gen server chips in turn are priced at a premium over the current value of the older-gen, to account for their higher performance. The lower financial constraints don't enter into it all that much.
For many years until about a decade ago (more precisely until the launch of the Intel Skylake Server processors) the server CPUs had a performance per dollar comparable to desktop CPUs so the expensive server CPUs were expensive because of their higher performance.
But since then the prices of server CPUs have ballooned and now their performance per dollar is many times worse than for desktop CPUs. Server CPUs have very good performance per watt, but the same performance per watt is achieved with desktop CPUs by underclocking them.
The only advantage of server CPUs is that they aggregate in a single socket the equivalent of many desktop CPUs, including not only the aggregate number of cores, but also the aggregate number of memory channels and the aggregate number of PCIe lanes. Thus a server computer becomes equivalent with a cluster of desktop computers that would be interconnected by network interfaces much faster than the typically available Ethernet links.
While for embarrassingly parallel tasks a server computer will cost many times more than a cluster of desktop computers with the same performance, it will have a much smaller disadvantage, or might even have a better performance/cost ratio, for tasks with a lot of interprocess/interthread communication, where the tight coupling between the many cores hosted by the same socket ensures lower latency and higher throughput for such communication.
The owners of datacenters are willing to pay the much higher prices of modern server CPUs because consolidating multiple old servers into a single server brings savings on other components: fewer coolers, fewer power supplies, fewer racks, simpler maintenance and administration, etc.
While the retail prices of server CPUs are huge, the biggest customers, like cloud owners, can get very large discounts, so for them the difference compared with desktop CPUs is not as great as for SMEs and individuals. The large discounts that Intel was forced to accept during the last few years, to avoid losing too much of the market to AMD, are why Intel's server CPU division has lost many billions of dollars.
You can't make a desktop computer 4 times larger, but there's very little preventing you from putting 4 racks where you had 1 before. If floor space is the expensive part of a datacenter, then probably some incentives are misaligned.
As for the price of land and connectivity: in a large city, land prices start at a few million dollars per square kilometer, and use of cable channels can cost from $50 per meter (easily $200/m).
Plus, arranging the space can take years.
Heat dissipation in the megawatt range may simply be prohibited by local regulations.
So space in large cities is a very serious problem, and for a business it is usually easier to "compress" as much computing power as possible into one rack.
AMD hiding Threadripper behind their back: Uh yeah what a terrible idea, we definitely didn't actually do that. Making a CPU that's twice the size, how ridiculous would that be right?!
You can't place a DC just anywhere; in large cities space is extremely constrained, and land is extremely expensive.
Connectivity is also a big problem: you can't place a DC where it can't be connected to the power grid and to a very high-capacity network.
So yes, DC floor space is severely limited.
And the third issue: in recent decades, rack servers have come to dissipate extremely large amounts of heat. I've heard numbers up to tens of kilowatts per rack, which is just hard to dissipate with air cooling (as an example, all IBM Power servers have a liquid-cooling option, but that is a totally different price range).
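A back-of-envelope calculation shows why tens of kilowatts per rack strains air cooling. All numbers here (a 30 kW rack, a 15 K temperature rise) are illustrative assumptions, not vendor specs:

```python
# How much air must flow through a rack to carry away P watts with a given
# inlet-to-outlet temperature rise, from Q = m_dot * cp * dT.
def airflow_m3_per_s(power_w: float, delta_t_k: float = 15.0,
                     cp_j_per_kg_k: float = 1005.0,
                     air_density_kg_m3: float = 1.2) -> float:
    mass_flow = power_w / (cp_j_per_kg_k * delta_t_k)  # kg/s of air
    return mass_flow / air_density_kg_m3               # convert to volume flow

flow = airflow_m3_per_s(30_000)  # a hypothetical 30 kW rack
print(f"{flow:.2f} m^3/s")       # ~1.66 m^3/s, roughly 3500 CFM through one rack
```

Pushing thousands of cubic feet of air per minute through a single rack is loud, inefficient, and near the practical limit, which is why high-density racks move to liquid cooling.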
Gemini tells me: calling an ASML machine a "giant refrigerator" is actually a pretty sophisticated observation. While it doesn't store milk, a massive portion of that plumbing is dedicated to thermal management and vacuum systems. (...)
"and vacuum systems" is the key here. The refrigeration is a well-solved issue.
There are impeller-type vacuum pumps, which is what most people think of.
Then there are multi-stage fans (turbomolecular pumps), where the purpose is to overcome random atomic vectors: an atom flying the wrong direction is more likely than not to hit a fan blade and bounce vaguely toward the output direction. Extra stages increase the odds of outward vectors, instead of rebounds off walls in some unhelpful direction. These are needed when the pressure is already so low that gas molecules don't hit each other, so they act like individual particles instead of a gas.
There are also getter pumps: reactive coatings inside the vacuum chamber. Their purpose is to permanently bind any stray molecules that tend to cling to surfaces (like H2O), so they won't eventually desorb and ruin the vacuum.
Each is used to reach increasing levels of "vacuum", which is more like "single-molecule denial gates" at that point.
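The transition to that "particles instead of gases" regime can be sketched with the kinetic-theory mean free path. The molecule diameter below is an assumed N2-like value, and the pressures are illustrative:

```python
import math

# Kinetic-theory mean free path: lambda = k_B * T / (sqrt(2) * pi * d^2 * p)
def mean_free_path_m(pressure_pa: float, temp_k: float = 300.0,
                     diameter_m: float = 3.7e-10) -> float:
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * temp_k / (math.sqrt(2) * math.pi * diameter_m**2 * pressure_pa)

# At atmospheric pressure molecules collide every ~70 nm: they behave as a gas.
print(mean_free_path_m(101_325))  # ~6.7e-8 m
# At high vacuum (1e-4 Pa) the mean free path is tens of metres: molecules fly
# ballistically and mostly hit walls or pump blades, not each other.
print(mean_free_path_m(1e-4))     # ~68 m
```

Once the mean free path exceeds the chamber dimensions, pumping stops being about moving a fluid and becomes about intercepting individual molecules, which is exactly what the turbo stages and getters above are for.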
An orthogonal question is what makes sense as a measure of complexity. One could use "number of parts" (whatever that means): NASA says the Space Shuttle has 2.5 million moving parts, while the article says the ASML machine has over 100,000 components. Another issue is how to deal with composition. A TSMC fab is obviously more complex than a lithography machine since it contains a lithography machine, but maybe the fab doesn't count as a "machine". Another issue is complexity vs parts: a 32-Gb DRAM chip has about 68 billion transistors and capacitors, but it's not extremely complex, since it's mostly the same thing repeated. And then there's the question of distribution: can you really count the Internet as one "thing"?
It's kind of pointless to fret about whether it's "the most complex" like there's an objective 1-dimensional ranking that even has utility.
Otherwise we might as well say the ASML machine is in orbit around the galactic center.
In either case, the secret design has the same effect, but sub secrets are the top of the top of top secret. Spies that leak sub secrets spend a long time in Leavenworth.
I don’t know that many people would classify the Space Shuttle as a machine. It doesn’t make anything.
Wrt the space shuttle, I would take some issue because you could say it's not just one machine, but a collection of many, for example it probably has onboard computer systems that are not always in use. It would be a bit like saying that a whole factory is "a machine". Whereas the ASML devices serve one single clear purpose.
Off topic: Does it blow anyone else's mind that a DRAM chip has more transistors on it than there are humans on the planet?
I don't have a question about ASML or the machine in particular, but I am curious about your thoughts on something: I've recently noticed a fair bit of media (blogs, YouTube videos, TikTok clips) about the same thing: this machine and the EUV process. Do you think interest in this topic is just a coincidence or did something happen to cause these different content creators and authors to do a piece on it at around the same time? What caused you to do a piece on this now?
I think, if one were used to calculating cyclomatic complexity, such a headline is not only amusing but also fascinating, even if it is 'wrong' by .. some value system .. because the thought exercise of coming up with a more cyclomatically complex machine is rather a fruitful challenge. And that is why writers should be allowed to editorialize, because .. after all .. this is a thought-provoking article, isn't it ..
However, let us continue to postulate there are other forms of complexity that can be measured - what would you suggest are the other 3 or 4 contenders for the title?
ASML: Complexity as a strategic resource
ASML: The hardest-to-reproduce machine in the world
ASML: One of Europe's most complex strategic resources
>ASML started off life within Philips, the Dutch consumer electronics giant.
Philips started with light bulbs, which used electrons for direct visual and UI/UX purposes. Some of the simplest electronic components, but quite a bit like appliances themselves. No surprise that "lamp" in English means either a bulb, an appliance, or both.
Vacuum tubes were the next step up in complexity and I guess you can take it from there.
In the early radio days it didn't take too many "ampules" to make a radio. Not nearly as complex as a cellphone, but bizarrely more complicated than a light bulb already.
The Edison Effect turned out to be a very strong force after all :)
At one time every building that had electronics, had vacuum tubes. When you moved a radio or TV set, you were carrying your own little vacuum chambers with you from place to place, even as late as CRT's.
With solid-state electronics like this, the vacuum chambers are much bigger, but are only located in a centralized factory process, so you don't have to carry them around with you if you want to be portable.
You wouldn't want to anyway, look how heavy they have gotten ;)
The more straightforward video of ASML EUV is from Branch Education: https://www.youtube.com/watch?v=B2482h_TNwg
Because that vid gives an overview of the whole machine, it gives context to what each scientist is talking about in the Veritasium interviews.
https://m.youtube.com/c/Asianometry/videos?ra=m
Is this just restating the size of the same shipment three times?
Makes one wonder: would we be much better or worse off if we reshaped society to do more of the things where a new technology is unlikely to work but highly beneficial in the limit? Would we sooner have 10 additional ASMLs, or waste a lot of resources?
What is no longer mentioned is that ASML made another big gamble at the time they started on EUV: they decided to make an all-in-one chip-making machine that took in silicon and output chips (instead of matrices of chip circuits laid out on a wafer).
On paper, the machine would save a lot of money for the fab houses. IRL: no one asked for it, and no one was willing to risk their entire production on a single, untried, swiss army knife of a fabricator.
The whole program was a wash. People were reassigned and the project died a very quick death. ASML lost a ton of money on this misguided attempt, but not enough to choke them.
So, they rolled the dice twice, and one gamble paid off handsomely. If it went the other way, they'd be a smaller company, and Moore's Law would be overshooting reality. If neither paid off, they'd be DOA.
> reshaped society
Invalidate all of ASML's patents = get cheaper chips, sooner.
It is intellectual property which gives some of us the ability to build these things and sell them to others - get rid of this phony concept and we can have more nice things...
What is happening with ASML now, once happened with the wheel.
Think about that.
You're basically saying "ASML's entire production line is worthless unless it is rare and coveted", which is .. obviously not true .. because of course the output is immensely useful.
The world needs more chipfabs, not fewer. A properly scaled chipfab in places like Broome or Santiago, or .. indeed in orbit .. would go a long way toward sorting out the world's fires.
The thing stopping us is the international, imperial system of patents and intellectual 'property', which makes nation states subservient to each other on the basis of ideas.
The ideas could be spreading far and wide, but we humans are keeping them in our cage, in which the only reward is having other cages to extract wealth from ..
If everyone could make these machines, there'd be more of these machines.
There are so many examples of this out there, already, that I find this specious "no next generation" argument to be either simply coming from bias, or ignorance.
For sure, we only care about Taiwan because there is one Taiwan. End patents: no more Taiwan problem.
As a software engineer by trade, the above parable communicates a few very important things to me and little else by comparison: the machines are ultimately fragile and nowhere near "optimised", since the complexity is, by its own admission, substantial to put it mildly; the machine is not a commodity, exactly, and one of the million pieces breaking subtly likely renders it inoperable; its cost is proportional to its complexity (read: astronomical); and the mere fact that it's a focal point of geopolitics only supports the rest of the argument that it's a machine of the current stone age, much like siege engines were at some point the closely guarded, win-or-lose multipliers of feudal culture.
I mean it's certainly interesting to read about the complexity, but reducing the complexity and commoditising the whole thing is what's really going to be impressive I think :-)
I am probably speaking out against the nerd in us, and none of what I said should detract from enjoying the article or the subject, it's just that I think complexity here is the giveaway of us not having conquered UVL exactly, not quite yet :-) Or maybe we lack the right materials which would allow us to reduce the machine or make it less complex or prone to calibration related errors.
The one 'machine' encompasses more disciplines than most universities offer. It's really a whole bleeding edge factory compressed into a room.
What is the corresponding revolution in chip production? I imagine something like FPGAs for lithography: a wafer that can somehow work on another wafer in a sandwich-like configuration. Such a process could potentially improve on each iteration and thus get very good, very fast.
> since replicating EUVs is close to impossible.
I.e., there are lots of fun applications for radar, some of which have very complex math involved in the manufacturing processes. Then you've got automotive radar: you mainly need to get the position and velocity of some objects, so the math is simpler. But you have to certify that stuff for ASIL-D, and no one makes ASIL-D radars for you, so you combine multiple radars ("three B's make a D", as the saying goes). Then you have to worry about BOM costs, because you want to ship 10 million cars.
In this case, its the latter.
https://en.wikipedia.org/wiki/Eastern_Interconnection
On paper, EUV is a relatively modest undertaking versus commercial aviation: EUV is deeper integration, commercial aviation is breadth. But in terms of the scale of effort required for nation-state coordination, EUV is probably, all things considered, easier to replicate, because it has no regulatory slowdown; it's purely a host-country physics problem. Having enough talent and throwing it at the problem, times espionage, times poaching talent, times time, will likely crack a precision-physics problem sooner rather than later. Commercial aviation, by contrast, has complicated geopolitical/regulatory hurdles and an order of magnitude more suppliers and scale. TL;DR: EUV has a smaller organizational surface area for a determined state to pursue by concentrating money, talent, and effort. You can hire ex-ASML people to bootstrap EUV development; it's much harder to get the globe to buy COMAC without decades of airworthiness. There's a reason Western analysts predict PRC EUV in the 2030s (meanwhile the PRC already beat the prototype timeline estimates) but see no realistic global COMAC in the same timeframe, even though the PRC has been hammering at commercial aviation seriously far longer than at EUV.
Of course, doing it "legally" is another question - someone in the US trying to replicate would likely run into patent and other issues.
But a top-secret Manhattan-style project done by the US or China? definitely doable, and if you add spy-shit in, perhaps even faster.
Either China will catch up on this or that particular technology will become obsolete. But it is certain that they won't stay behind forever (measured in a small number of decades at most).
What is far less certain is what ASML will be able to do at that time, i.e. if they will be able to progress significantly over the state-of-the-art of today, or they will reach a plateau.
Besides China, there is a renewed effort in Japan to become competitive again, so ASML may face in the future both Chinese and Japanese competitors.
You might place an upper limit using history but in this case I'd guess that limit would end up being much larger than the present semiconductor industry itself might last.
There is a level of arrogance in the West that China only does cheap but simple/low-quality work, whereas this is only a stepping stone along the way. German car manufacturers went into China during the 90s with that mindset, expecting it to stay that way forever; well, they don't think that anymore...
“Retaining the best workers is especially crucial in an area like photolithography, where a huge amount of tacit knowledge is used to assemble its machines. An ASML engineer once told He Rongming, the founder of Shanghai Micro Electronics Equipment, one of China’s top ASML competitors, that the company wouldn’t be able to replicate ASML’s products even if it had the blueprints. He suggested that ASML’s products reflected ‘decades, if not centuries’ of knowledge and experience. ASML’s Chinese competitors have systematically attempted to hire former ASML engineers, and there is at least one documented case of a former ASML employee unlawfully handing over proprietary information. But none of this appears to have narrowed the gap.”
However many secrets are involved, information wants to be free, and it's hard to believe that others won't figure it out.
By the time they do catch up, we'd better be steps ahead. What's after EUV?
- ASML's High-NA EUV machines ready for high-volume production
- Machines have processed 500,000 wafers, showing technical readiness
- Full integration into manufacturing expected in 2-3 years, ASML's CTO says
After that, it may be X-rays.
A disruptive step would be to move to 3D printing, but that (among other issues) is too slow at the moment. Maybe, ideas from nano robotics (https://en.wikipedia.org/wiki/Nanorobotics) can help there.
The lithography equivalents of that are laser direct write lithography and e-beam lithography. They've been used for decades in research labs, but they're impossibly slow for any mass production.
Atomic Semi are trying to make some derivative of these processes happen at a commercial scale.
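The "impossibly slow" point above is easy to sanity-check with a toy calculation. The pixel size and write rate below are purely illustrative assumptions, not the specs of any real tool:

```python
import math

# Toy throughput check: writing one 300 mm wafer pixel-by-pixel with a
# single direct-write beam, versus exposing a whole field in one flash.
wafer_area_m2 = math.pi * 0.15**2       # area of one 300 mm wafer
pixel_area_m2 = (5e-9)**2               # assumed 5 nm x 5 nm pixels
pixels = wafer_area_m2 / pixel_area_m2  # ~2.8e15 pixels to address
write_rate = 1e8                        # assumed pixels/s for one fast beam

days_per_layer = pixels / write_rate / 86_400
print(round(days_per_layer))  # on the order of 300+ days for a single layer
```

Even with thousands of parallel beams this stays orders of magnitude short of the ~100+ wafers per hour a projection scanner achieves, which is why direct write stays in research labs and mask shops.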
Even leaving size aside, I don't think that there are any credible way to 3D print something that complex.
Lithography enables that level of complexity because each layer is done in one go. I think any alternative technologies would have that property, too.
Well, even jet engine manufacturing is something that China is behind in (relatively speaking), and it seems to be simpler than some of the stuff in EUV machines.
I can understand why you can't just take one apart and copy it.
There's (apparently) 4 decades of accumulated cutting edge scientific research that has gone into these machines.
I suspect the machinery, process and human expertise required to simply produce the parts required for these machines is the real moat (oh and I guess the US-led export controls too).
The build tolerances for components are incredible. There are 11 primary mirrors in an EUV machine, each one has something like 100 coats of ultra-pure materials that are precisely deposited in picometer-thick layers with tolerances in the nanometers, across a 1-meter wide curved surface.
Then you have to position the mirrors perfectly inside the machine, again with tolerances in the nanometers.
So even if you know what you need to do, having the equipment and expertise to do it is a different thing.
And that's just one part of the 100,000+ parts that make up an EUV machine.
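One consequence of the all-reflective design is easy to compute: even excellent mirrors compound badly. A sketch, assuming roughly 70% reflectivity per mirror (a commonly cited ballpark for Mo/Si multilayers at 13.5 nm, not an ASML-published figure) and the 11 mirrors mentioned above:

```python
# How reflective losses compound across a chain of EUV mirrors.
reflectivity = 0.70  # assumed per-mirror reflectivity at 13.5 nm
mirrors = 11         # mirror count cited in the comment above

throughput = reflectivity ** mirrors
print(f"{throughput:.1%}")  # only ~2% of the source's EUV light survives the optics
```

That ~2% figure is a big part of why the light source has to be so absurdly powerful in the first place: almost everything it produces is absorbed before reaching the wafer.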
But in this case the Chinese will just develop their own alternative, which might work as well or even better.
"With all the problems we have getting this to work? We ought to ship our drawings to our competitors to slow them down!"
Very tongue-in-cheek, but... yeah. The entire machine underwent a massive overhaul when it was discovered that bare, unoxidized titanium in the presence of elemental hydrogen would absorb so much it became brittle. Who knew? Maybe some few chemists, but none worked in ASML design, as it happened.
Even before you get to the lithography machine you need silicon. For a long time we've known how this is done. You create what's called a boule, which is where you create a cylinder of almost pure silicon by seeding molten silicon with a crystal and slowly forming it. You then cut the boule into the silicon discs we often see. That machining and polishing itself has to be super-precise.
But I can remember when the tolerance for impurities was at 1 part per 300 million. I read recently that even 1 part per billion is now too impure. And that makes sense. The biggest chips are what? 80 billion transistors? I seem to remember NVidia makes chips in that range (or rather TSMC does for NVidia). At 1ppb that might make ruining your chip just too likely.
So my point is that there's a whole industry to make super-pure silicon which itself took amazing advancements and without that this machine would be a lot less useful.
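The purity intuition can be sketched with a toy independence model. The per-transistor defect rates below are made-up illustrative numbers, not real process parameters:

```python
import math

# Toy model: each of N transistors has an independent chance p of being
# ruined by an impurity; P(clean die) = (1 - p)^N ~= exp(-N * p) for small p.
def p_clean_die(transistors: float, p_defect: float) -> float:
    return math.exp(-transistors * p_defect)

# ~80 billion transistors, one-in-a-billion per-transistor defect rate:
# ~80 expected hits, so essentially every die fails.
print(p_clean_die(80e9, 1e-9))   # astronomically small
# Three orders of magnitude purer and most dice survive.
print(p_clean_die(80e9, 1e-12))  # ~0.92
```

It's a crude model (real yield depends on where defects land and on redundancy/repair), but it captures why tolerable impurity levels must shrink as transistor counts grow.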
Another part that amazes me is just how pervasive multiple layers on chips have become. I can remember when that was novel. The upper layers are made by cheaper machines with EUV reserved for a transistor "base layer" where all the interconnects really are.
It's amazing just how many problems had to be solved to make this possible.
Is this the correct term? Why do these long radio waves have the name "London"?
Unfortunately all that I get googling the term is a guide to local FM station frequencies.
And it's a source of serious hazardous waste products. It's a tin-ion laser, operating in an ultra-pure vacuum, on an unbelievably high-energy band (even laser "lines" have definable bandwidths). There's really not a lot of wiggle room in materials selection for the laser.
I mean we're not talking AMD FX and Core 2 Duo here, it's Raptor Lake and Zen 3, it's perfectly viable and still being sold in droves right now.
There’s also the issue of older process nodes not being profitable enough anymore, which explains why, at the height of the chip supply crunch, older ARM chips were in short supply but there was ample stock of the 40nm-class RP2040.
And sure, a chip layout can be shrunk; but that requires a whole new recertification cycle.
I don't think I'm being entirely hyperbolic when I say the consumer market only exists to put devices that can connect to and feed the datacenter loads into the general population's hands.
There's little need to put large datacenters in downtown Chicago and Manhattan.
Surely you don't believe that the entire chip industry had not thought of "wait what if we just make the chips bigger".
Same reason that so much work was put into increasing wafer diameter over the decades.
More chips per wafer means a lot.
Much more than for performance sake.