More than seven decades ago, the world’s first electronic computers were developed. They were large enough to fill an entire room, and they weighed several tons.
Today, we have computers that are thousands of times more powerful and that fit inside our pockets…
To most consumers, this fact has been nothing short of a miracle of modern science. To technologists, though, it’s the result of a general engineering law first described by Intel co-founder Gordon Moore.
In 1965, Moore wrote an article in Electronics magazine predicting the future of the computing industry over the following 10 years. In it, Moore observed that the number of components (transistors, diodes, capacitors, etc.) on an integrated circuit (IC) had been doubling every year, and he predicted that this pattern would continue for at least the next decade.
Some years later, Moore reflected on his prediction in an interview, remarking that he had “no idea” at the time of writing whether or not his prediction was going to be accurate. As it turned out, though, he was pretty close to the target.
By 1975, the number of components packed onto an IC had doubled nine times. Moore’s forecast implied 10 doublings over those 10 years, so he was off by just one — close enough to cement “Moore’s Law” as a fixture of the technology industry to this day.
The public’s perception of Moore’s Law, though, is more often misguided than not. After seeing only nine doublings between 1965 and 1975, Moore revised his prediction that year, slowing the expected rate of doubling to roughly every two years.
Nonetheless, an Intel employee by the name of Dave House would later put forward his own bastardized version of Moore’s Law, which ended up published on the company’s website: computing power would double every 18 months. That figure has since become the most commonly cited definition of “Moore’s Law.”
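If you want to see just how much the assumed doubling period matters, here’s a quick back-of-the-envelope sketch in Python (purely illustrative; the starting figure of 50 components is an assumption, roughly in line with the most complex chips of the mid-1960s):

```python
# Illustrative only: how a chip's component count compounds under
# different readings of "Moore's Law" over a 10-year span.
START_COMPONENTS = 50   # assumed component count of a complex mid-1960s IC
YEARS = 10

scenarios = {
    "every 12 months (Moore, 1965)": 1.0,
    "every 18 months (House)": 1.5,
    "every 24 months (Moore, 1975)": 2.0,
}

for label, period_years in scenarios.items():
    growth = 2 ** (YEARS / period_years)   # doublings = years / period
    print(f"Doubling {label}: ~{growth:,.0f}x growth, "
          f"~{START_COMPONENTS * growth:,.0f} components after {YEARS} years")
```

Over a single decade, yearly doubling compounds into roughly 1,000-fold growth, while two-year doubling yields only about 32-fold growth, which is why pinning down the doubling period matters so much.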
The point is, Moore’s Law doesn’t hold up quite the way many people believe it does. As Moore correctly observed in 1975, the rate of doubling was already slowing, a fact that has only become more apparent today.
Today, it is widely recognized by experts in the field of electronics that Moore’s Law isn’t just dying; it may already be dead.
You see, for many years the semiconductor industry treated Moore’s Law as a kind of self-fulfilling prophecy. Global consortiums published roadmaps for preserving Moore’s Law, but eventually the manufacturing facilities needed to build these chips became too expensive and the physical limitations became too difficult to overcome.
In 2016, the International Technology Roadmap for Semiconductors (ITRS) issued its final roadmap for preserving Moore’s Law, effectively conceding that the industry could no longer keep up the pace.
It was the beginning of the end, but also a new beginning in and of itself.
The Death of Moore’s Law and the Birth of Something New
While it may be fair in some respects to say that Moore’s Law is dying, it’s probably more accurate to frame where we are today as a kind of metamorphosis for computers. Like a caterpillar turning into a butterfly, the unraveling of Moore’s Law is ultimately going to be a boon for the technology industry.
To understand why this is so, it helps to have a very basic understanding of how conventional computer chips work. Today’s chips contain billions of transistors, which operate like switches with two distinct electronic states (on/off). A transistor’s state is determined by whether or not current is flowing through it.
These on/off states are represented in a computer as 0s and 1s, the binary digits (bits) that all software is ultimately built on. A chip with billions of transistors, in other words, can hold and work with billions of bits at any given moment.
In short, the more transistors you can fit on a chip, the more that chip can compute and store at once, and the faster it is going to be.
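For readers who want a concrete picture, here’s a toy Python sketch (nothing like how real hardware is designed, just an illustration of the counting involved) showing how a handful of on/off states translates into the values a computer can represent:

```python
# Toy illustration: n on/off switches (think of transistors used as
# one-bit storage cells) can represent 2**n distinct values.
for n in (1, 8, 32, 64):
    print(f"{n} switches -> {2 ** n:,} distinct values")

# The letter "A", for example, is stored as eight on/off states:
print(format(ord("A"), "08b"))   # -> 01000001
```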
After decades of Moore’s Law playing out, though, today’s transistors measure just a few nanometers, only dozens of atoms across, and that creates a major problem: at these scales, electrons can tunnel straight through a transistor’s barriers even when it’s switched off, leaking current and corrupting a chip’s calculations (a bit of quantum physics going on here not worth getting into).
In other words, there are hard physical laws that effectively prevent us from fitting more and more transistors on a chip. We’re about at that limit already, which means computers will either stagnate or adopt something new.
Integrated Circuits Poised to Go the Way of the Abacus
It’s worth mentioning at this point that before electronic computers were invented, mechanical computers were all the rage. It wasn’t until electronic circuits came around that devices like the Marchant calculator went out of style.
Today, we’re undergoing a similar transition in computing, except instead of moving from mechanics to electronics, we’re moving from electronics to the emerging field of photonics. This transition will have a massive impact on the market, particularly semiconductor companies entrenched in dead-end IC technologies.
You see, where electronics relies on the movement of electrons to record and transfer data, photonics relies on the movement of photons, which have no mass and, unlike electrons, can pass right through one another without interfering (weird as that might seem). As such, photonics can overcome the physical limitations of modern electronics, much in the way that electronics overcame the physical limitations of mechanics.
The real-world impact of this inevitable transition from electronic computing to photonic computing is difficult to overstate, but just know that it’s going to be big.
Notably, Bill Gates and Travis Kalanick are among those who are already eyeing this space. Both have invested in private company Luminous, which aims to build on the emerging field of silicon photonics. As the name would suggest, we’re talking about computer chips that use photons rather than electrons to transfer data.
Ultimately, this will mean a future of dense, energy-efficient computers, capable of things that most people today cannot even imagine. These computers will be smaller, faster, smarter, and cheaper than anything on the market today.
Simply put, photonics will be an enormous investment opportunity over the next several decades. I’m currently in the early stages of putting together a report on my top picks on this emerging technology, so keep an eye out in the coming months, and I’ll have something for you.
Until next time,

Jason Stutman