Intel's Technology Outlook
Last edited by sec2100 on 2021-9-4 19:29.
https://seekingalpha.com/amp/art ... amage-to-nvidia-amd
Intel Could Deal Serious Damage To Nvidia, AMD
Sep. 3, 2021
Summary
As Intel recovers from its previous roadmap delays (2016-2019), it is becoming much more competitive.
Intel will be launching a myriad of such products in the near future. This puts Intel in a great position to deal damage to both AMD and Nvidia.
The tide is turning in favor of Intel. Investors may want to pick (or switch to) the "right" side.
Intel is priced at less than half the market cap of Nvidia despite superior revenue and earnings.
Investment Thesis
Over the past few years, I have seen serious doubts among investors about Intel’s technology. Is Intel (INTC) really still at the forefront of technology? I would tell investors that, yes, Intel has developed very capable hardware and technology that I forecast will deal serious damage to both Nvidia (NVDA) and AMD (AMD). These products are not vaporware and will very soon come to market, and in many aspects are a level above the competition (in both CPUs and GPUs).
This article will review the progress Intel has made as part of my thesis that its improved product line-up has stopped the AMD threat as well as the ongoing collision course between Intel and Nvidia. Hence, I would argue that as the tide is turning, an investor would rather want to be invested in the company that will deal damage, rather than to be on the receiving side.
To that end, I currently view Intel's stock price as a quite compelling entry point. The stock isn't much different compared to a year ago, when Intel just delayed the then-called 7nm process. Compared to then, Intel is currently in a significantly better position.
Architecture Day 2021: Client
1. Alder Lake (CPU)
A few months ago, I already ‘warned’ investors that Intel has some really strong products on the horizon, with the game-changing Alder Lake in particular. To be specific, I predicted that Intel had developed a unique combination of its high-performance (Golden Cove) architecture together with its more efficient (Gracemont) architecture. This would allow Intel to very economically scale its core count: the efficient cores are 4x smaller in size and hence deliver much higher performance per mm2. Hence, for a given transistor/silicon budget, using more smaller cores delivers more performance than using fewer bigger cores. Since AMD only has one architecture, Zen, I called this capability unmatched.
Intel: The efficient core is designed to construct a throughput machine and deliver the most efficient computational density.
This is exactly what Intel confirmed during its Architecture Day 2021 presentation. Alder Lake, by using two architectures, will combine the best of single-threaded as well as multi-threaded performance in one CPU, without compromises. As I had already predicted, this will be most pronounced in the laptop space, where Intel will leapfrog AMD in core count in just one generation: going from 4 or 8 cores in Tiger Lake to 6+8 cores in Alder Lake.
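The core-count economics behind this claim can be checked with simple arithmetic. In the sketch below, the 4x area ratio comes from Intel's own figures; the small core's per-core throughput (0.5 of a big core) is a hypothetical placeholder, chosen only to make the trade-off concrete:

```python
# Back-of-envelope: multi-threaded throughput per unit of silicon area.
# The 4x area ratio is from Intel's claims; the small core's throughput
# (0.5 of a big core) is a hypothetical placeholder for illustration.
BIG_AREA, SMALL_AREA = 4.0, 1.0
BIG_PERF, SMALL_PERF = 1.0, 0.5

def config_stats(big, small):
    """Total silicon area and multi-threaded throughput of a core mix."""
    area = big * BIG_AREA + small * SMALL_AREA
    perf = big * BIG_PERF + small * SMALL_PERF
    return area, perf

# A pure big-core design vs. the hybrid 2+8 laptop configuration:
# same silicon budget, but the hybrid mix delivers more throughput.
area_a, perf_a = config_stats(big=4, small=0)
area_b, perf_b = config_stats(big=2, small=8)
print(f"4+0: area={area_a}, perf={perf_a}")
print(f"2+8: area={area_b}, perf={perf_b}")
print(f"hybrid gain at equal area: {perf_b / perf_a - 1:.0%}")
```

Under these toy numbers both configurations occupy the same area, yet the 2+8 mix comes out 50% ahead in throughput, which is exactly the shape of Intel's claim.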
Golden Cove
The above claims are confirmed once we start diving into the technology. Most notably, since the Apple (AAPL) M1, there have been claims that x86 supposedly would be stuck at four decoders. This is called a 4-wide architecture (wider means more performance per clock, although performance does not scale linearly). By contrast, Apple’s M1 is an 8-wide architecture.
However, Intel is busting this myth as Golden Cove is narrowing the gap with the M1: it is a 6-wide architecture. Although this is still less than Apple, one should keep in mind that Apple’s M1 has a frequency of ~3GHz, whereas Alder Lake is expected to go as high as 5.5GHz.
A second key spec in CPUs is what is called the out-of-order window or reorder buffer. (The bigger this window, the more possibilities a CPU has to search for instructions which it can execute in parallel.) In that regard, many people were stunned when it became known that Apple had a window of over 600 instructions. For comparison, the second-largest was Intel’s Sunny Cove with a bit over 300 instructions. However, Golden Cove has also here seen a substantial improvement, increasing to over 500 instructions.
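The effect of a larger out-of-order window can be illustrated with a toy scheduler. This is a deliberately simplified model (unit-latency instructions, no register renaming, not a faithful CPU simulation): each cycle it issues up to a fixed number of ready instructions found within the next `window` instructions in program order.

```python
# Toy model of an out-of-order window: each cycle, issue up to
# issue_width instructions from the next `window` instructions in
# program order whose dependencies completed in earlier cycles.
def cycles_to_retire(deps, window, issue_width):
    n = len(deps)
    done = [False] * n
    cycles = 0
    while not all(done):
        cycles += 1
        # Snapshot readiness at the start of the cycle.
        ready = [not done[i] and all(done[d] for d in deps[i])
                 for i in range(n)]
        head = done.index(False)  # oldest unretired instruction
        issued = 0
        for i in range(head, min(head + window, n)):
            if issued == issue_width:
                break
            if ready[i]:
                done[i] = True
                issued += 1
    return cycles

# 16 independent chains of 4 dependent instructions each: plenty of
# parallelism, but only visible if the window spans several chains.
deps = [[i - 1] if i % 4 else [] for i in range(64)]

print("window=4: ", cycles_to_retire(deps, window=4, issue_width=4))
print("window=16:", cycles_to_retire(deps, window=16, issue_width=4))
```

With the wider window the 4-wide machine stays fully fed (16 cycles for 64 instructions); the narrow window can only see one chain at a time and takes roughly twice as long, which is why window size matters alongside width.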
Overall, Intel is touting a 19% average improvement in performance per clock cycle. Still, one criticism that could be mentioned is that despite Golden Cove being 50% wider than Zen 3 and also having about double the out-of-order window, the difference in performance per clock isn't expected to be all that significant (although Intel most likely will have leadership since Intel is less than 19% behind Zen 3); it seems that Intel may be leaving some performance untapped, which it may 'unlock' in subsequent generations.
In summary, Apple still has the most advanced architecture, but Intel will likely claim overall performance leadership (against both Apple and AMD) by virtue of its higher frequency. Of course, this difference in operating frequency, combined with Apple’s superior process node, means that the M1 (and the M2 in a few months) will still be much more power efficient, but that is where the Gracemont cores come into play.
Gracemont
Gracemont was optimized for two main goals: (1) power efficiency (while maintaining high performance), and (2) small silicon area.
For the first point, Intel’s biggest claim was that a cluster of four Gracemonts can deliver the same peak performance as two Skylake cores (with HyperThreading) at just 20% of the power consumption. For the second point, Intel claims that a Gracemont is 4x smaller than a Skylake core, which until two years ago was Intel’s flagship architecture on 14nm.
Given the smaller size of Gracemont, instead of making a 10-core Golden Cove CPU, Intel opted for combining 8 Golden Coves with 8 Gracemonts. In the laptop space, instead of a 4-core Alder Lake, Intel used this hybrid technology to make a CPU with 2+8 cores. Intel claimed the latter combination (2+8) delivers over 50% more performance than a 4-core Golden Cove.
Thread Director
Although heterogeneous CPUs have been common in mobile for years, they are mostly a novelty for the x86 and PC world. As such, Intel developed new technology to ensure that workloads will be scheduled to the right cores. Intel has created the Thread Director as a hardware-based solution which uses dynamic runtime information about workloads, and worked with Microsoft to ensure that Windows 11 will be able to use that information for its scheduling.
As one example, Intel said that it could move an AI workload (which would normally run on the Cove) to the Mont when the workload was executing non-AI code, and move it back to the Cove when it started executing AI code.
Other stuff
Intel is also moving to the forefront of I/O technology with DDR5, PCIe 5.0, Thunderbolt 4 and Wi-Fi 6E.
Remember that just two years ago, AMD fans were making fun of Intel for being late to PCIe 4.0. So in just two years, the tables have turned: AMD is not expected to make the transition to PCIe 5.0 any time soon.
Example
During the HotChips conference that followed shortly after Architecture Day, Intel provided an example that confirmed my “Game-Changing Alder Lake” thesis. In that article, I claimed that contrary to Arm’s big.LITTLE, Intel wasn’t using hybrid architecture for low power, but primarily for multi-threaded performance to catch up to AMD.
In the example, the performance of a 4-core Golden Cove was compared to a 2-core Golden Cove + 8-core Gracemont. Note that these two configurations will have about the same physical silicon area. However, Intel claimed that the second configuration delivers >50% higher multi-threaded performance.
This demonstrates Intel's hybrid advantage. Instead of Intel’s upcoming 4-core Alder Lake being just 19% faster (due to the 19% higher IPC) than Tiger Lake, Intel will deliver almost 2x the performance by using a 2+8 configuration.
Although Intel could also have accomplished such a performance increase (without using Gracemont cores) by increasing the core count to 6 or 8, the big cores are inherently less area efficient. Hence, Intel would have had to give up gross margins to catch up.
In the higher-end laptop space, Intel has opted for a 6+8 configuration (again replacing two big cores with eight smaller ones), which should deliver more performance than a hypothetical 8-core CPU.
Summary
Since my previous article about Alder Lake, Intel has confirmed what I had speculated: Intel is not interested in increasing the core count of the big cores. Instead, going forward Intel will use the Gracemont cores to deliver a very efficient "throughput machine".
For example, in the next few generations, Intel will quadruple the core count of the small core cluster from 8 to 16 to 32 cores. Since the Monts are 4x smaller in size, that means that Intel will soon be dedicating just as much silicon area to the Monts as to the Coves.
Gracemont is just as fast as Skylake, at a fraction of the power. Although more modern architectures such as Golden Cove and Zen 3-4 are already a level above Skylake, the sheer additional multi-threaded performance gained by using ("spamming") so many of these small cores should be substantial. Hence, this hybrid combination of Monts and Coves is a recipe to deliver both leadership single-threaded and multi-threaded performance.
2. Xe HPG and Alchemist (GPU)
Intel has also provided quite a bit of information about its upcoming discrete GPU. Four years after announcing that it would launch high-end GPUs, this is finally happening. It will still be a few months, though, as there has been a slight delay to Q1’22.
The most important information:
Renamed the DG2 chip to “Alchemist”.
Provided a roadmap of chips that follows the alphabet, and for now, goes up to Druid.
Indicated that Xe HPG achieves 1.5x the frequency of the integrated Xe LP graphics and also 1.5x the performance per watt.
Announced that the chip is manufactured on TSMC (TSM) N6.
Announced XeSS upscaling technology. It works similarly to Nvidia’s DLSS 2.0, but Intel has also created a non-proprietary version of XeSS (which hence competes with AMD’s FSR) that should work on most GPUs.
From the wafer Intel showed, it has been calculated/confirmed that the chip is a bit below 400mm2 in size, which indicates (as expected) that Intel is targeting the mid-range rather than the high-end segment.
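Die size feeds directly into cost. As a rough illustration, the standard first-order dies-per-wafer approximation applied to a hypothetical just-under-400mm2 die on a 300mm wafer (the exact die area is an estimate, and this ignores scribe lines and defect yield):

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """First-order approximation: gross dies on a circular wafer.

    Ignores scribe lines, edge exclusion and defect-driven yield loss.
    """
    r = wafer_diameter_mm / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Alchemist at just under ~400 mm^2 on a 300 mm wafer (illustrative):
print(dies_per_wafer(300, 396))
# A hypothetical larger high-end die for comparison:
print(dies_per_wafer(300, 600))
```

The point of the comparison: candidates per wafer drop quickly with die area, which is one reason a mid-range ~400mm2 target is a more forgiving first discrete GPU than a high-end die would be.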
In terms of performance, in the best case, it might compete in the upper mid-range segment. Pricing will then determine if review sites will recommend the chip based on performance per dollar. As such, Alchemist seems most compelling for the laptop segment, where it should be more competitive.
However, what I found most reassuring about Intel’s disclosures wasn’t so much the Alchemist chip, but the roadmap that already goes three GPUs into the future (with Druid). This indicates that Intel may be targeting an annual cadence. This is necessary since Alchemist launches in the middle of AMD’s and Nvidia’s GPU cycles, as those GPUs launched last year. In other words, Intel could further close the time-to-market gap and improve its competitiveness if it could launch a 5nm GPU by early 2023.
Another reason why Intel could be more successful in the laptop space in the near term is because it likely can more effectively utilize its existing OEM partnerships.
Architecture Day 2021: Data Center
1. Sapphire Rapids (CPU)
One aspect investors should keep in mind throughout this discussion is that, contrary to much debate on the internet, there is actually much more to a data center CPU than just its core count.
Just like Alder Lake, Intel called Sapphire Rapids its most significant CPU in a decade. Although Sapphire Rapids does not have a hybrid configuration, the CPU is updated across the full stack. I have discussed Sapphire Rapids previously, so I will only provide the highlights.
Finally starting Intel’s transition to tiles/chiplets, Sapphire Rapids consists of up to four tiles and will be about twice the ~800mm2 monolithic reticle limit in total size.
By leveraging EMIB packaging, the CPU has monolithic-like performance. As Intel described it, it is a physically tiled CPU, but logically monolithic. For example, every (Golden Cove) core can access all cache on any tile.
DDR5, PCIe 5.0, CXL 1.1, Optane and integrated HBM provide leadership I/O and memory.
Fundamentally, Sapphire Rapids uses the same Golden Cove architecture, but with several additional improvements for the data center. For example, Intel emphasized consistent performance for multi-tenant use-cases, as well as strong performance improvements for elastic computing models and microservices: Intel claimed an improvement of 36% over Ice Lake, which is more than the 19% increase in IPC.
Additionally, Intel put a lot of emphasis on new domain accelerator engines in Sapphire Rapids. These can deliver step-function improvements in key areas.
AMX (advanced matrix extensions): for the hot area of deep learning/artificial intelligence, Intel further cemented its already unquestioned leadership by improving performance by no less than 8x. I estimate that AMX provides Sapphire Rapids with a performance of nearly 300 TOPS (int8), which is not too far off Nvidia’s A100, which has a price tag of $10k and cannot execute regular CPU code. In essence, you get something like a V100 or half an A100 "for free" with every Sapphire Rapids CPU.
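My ~300 TOPS estimate can be reproduced with a back-of-envelope calculation. Note that the per-core MAC rate and the sustained frequency below are my assumptions for illustration, not Intel-published figures:

```python
# Back-of-envelope behind the "~300 TOPS int8" AMX estimate for
# Sapphire Rapids. The per-core MAC rate and sustained all-core
# frequency are assumptions, not Intel-published figures.
cores = 56              # top Sapphire Rapids SKU
macs_per_cycle = 1024   # assumed int8 MACs per core per cycle via AMX
ops_per_mac = 2         # a MAC counts as multiply + add
freq_hz = 2.5e9         # assumed sustained all-core frequency

tops = cores * macs_per_cycle * ops_per_mac * freq_hz / 1e12
print(f"~{tops:.0f} TOPS int8")
```

Under these assumptions the CPU lands just under 300 TOPS int8, consistent with the "half an A100 for free" framing in the text.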
AiA (accelerator interfacing architecture): this allows for offloading tasks such as synchronization and signaling with attached accelerators.
DSA (data streaming accelerator): this is in essence a sort of on-chip IPU (see below), for offloading common data movement tasks. Intel provided an example where CPU utilization was reduced by 39 points in a packet switching application. It supports up to 4 instances per socket.
QAT (quick assist technology): this accelerator offloads crypto and (de)compression. This is not a new accelerator, but has been upgraded 4x to 400Gbps crypto. Intel said it delivers the performance equivalent of 1000 CPU cores. For example, CPU utilization can be reduced from 100% to 2% after offloading.
Summary
Intel claims Sapphire Rapids sets a new standard for data center performance, and certainly the list of both new and updated features is impressive. Features such as AMX, DSA, AiA, integrated HBM, next-gen Optane and CXL 1.1 are unmatched. Even when it comes to regular performance on standard benchmarks (which do not use specific accelerators), although Sapphire Rapids will only max out at 56 cores, as detailed above Golden Cove is a leadership CPU architecture.
As such, I expect Sapphire Rapids to be extremely competitive in general performance, while further providing unmatched additional value through its accelerators such as AMX. These may not be visible in most benchmarks, but will nevertheless deliver real (world) value to customers.
I would caution investors against comparisons such as in the tweet above. The main issue is that the author of the tweet cherry-picked the comparison in AMD's favor. One could just as easily find comparisons where Intel comes out on top: since it is known that Sapphire Rapids will launch several months earlier than Genoa, one could argue that the more valid comparisons are Sapphire Rapids vs. Milan and Granite Rapids vs. Genoa.
2. Mount Evans (IPU)
Investors may have noticed that a new class of accelerators is emerging in the data center. Nvidia calls these DPUs or data processing units, a capability it gained through its Mellanox acquisition. Intel until recently called them SmartNICs, but is pivoting to the term IPU or infrastructure processing unit. Intel’s IPU term is much clearer about what the chip does; after all, it could be argued that every chip ever manufactured is a "data processing unit" – without data, a chip can do nothing.
As the term indicates, the IPU is tasked with offloading infrastructure processing tasks from the CPU. This has become especially relevant in the cloud era, where cloud service providers’ whole business model is built on renting out their CPU cores. CPU cycles that are used for these generic infrastructure tasks cannot be monetized and hence are wasted. Intel claims 30-80% of CPU utilization is such overhead.
Intel provided an analogy with a hotel: there should be a clear separation between the tenant’s space (the hotel room) and the infrastructure. This is what the IPU provides: the infrastructure code should only run on the IPU. This allows the CPU to be fully monetized.
Intel’s strategy in IPUs is very reminiscent of its strategy in artificial intelligence: not one size fits all. As such, Intel is continuing to invest in its Ethernet- and FPGA-based SmartNICs. Additionally, as indicated above, Sapphire Rapids also has various on-CPU offloading capabilities with QAT, AiA and DSA.
The last piece of the puzzle, which Intel newly announced, is to also enter the field of discrete ASIC-based IPUs. To that end, Intel announced Mount Evans. Intel said it was developed in partnership with a leading cloud service provider.
Intel disclosed various features/capabilities of the chip, including a “best-in-class” programmable packet processing engine. This is based on the P4 language which was developed by Barefoot Networks, which Intel acquired a few years ago. As one notable detail, the chip also contains 16 Arm Neoverse N1 cores. The chip is capable of 200G networking processing and supports up to 4 host Xeons.
Summary
Intel claims it is already #1 in this space by market share and is seeking to extend its leadership and portfolio with a high-performance dedicated IPU. Since the competition has moved to ASICs, this was a necessary step to remain relevant in this space.
Still, some have noted that while Nvidia is putting a lot of AI capabilities in its DPU/IPU going forward, Intel’s Mount Evans does not have such a capability. Intel's two main competitors in this space, Nvidia and Marvell, also have already announced a 400G DPU/IPU. Nevertheless, Intel for its part claimed that the chip was designed with a top-tier cloud service provider and that 400G networks are not common yet.
3. Ponte Vecchio (GPU)
Last but not least, Intel disclosed more details about Ponte Vecchio. Most notably, Intel confirmed the compute tile is manufactured on TSMC (TSM) N5. This means Ponte Vecchio will be the first GPU for HPC and AI built on the latest process technology, providing an inherent advantage. Prior to Ponte Vecchio, given the discontinuation of the Xeon Phi line (and Nervana), Intel had been absent from this space for a few years. Its dedicated Habana line for AI still has not transitioned to even TSMC 7nm, so Ponte Vecchio will be Intel’s flagship product for AI and HPC next year.
Does the chip deliver? Based on Intel’s disclosures, I have calculated that Ponte Vecchio could achieve up to 2 POPS of int8 performance (at 2GHz), which is 2000 TOPS. For comparison, Nvidia’s latest A100 only delivers a bit over 600 TOPS. That means Ponte Vecchio comfortably delivers 3x the peak performance. Although Nvidia will likely also transition to 5nm later in 2022, Intel seems very well positioned with a competitive offering.
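The ~2 POPS figure can likewise be sanity-checked. The Xe-core and matrix-engine counts below reflect Intel's disclosures as I understand them, while the per-engine int8 op rate is my assumption for illustration:

```python
# Back-of-envelope for the "up to 2 POPS int8 at 2 GHz" estimate.
# Xe-core and XMX-engine counts reflect Intel's disclosures; the
# per-engine int8 rate is an assumption for illustration.
xe_cores = 128               # Ponte Vecchio, both stacks combined
xmx_per_core = 8             # matrix (XMX) engines per Xe core
ops_per_engine_cycle = 1024  # assumed int8 ops per XMX engine per cycle
freq_hz = 2.0e9

pops = xe_cores * xmx_per_core * ops_per_engine_cycle * freq_hz / 1e15
print(f"~{pops:.1f} POPS int8")

a100_tops = 624              # Nvidia A100 int8 dense peak (public spec)
print(f"vs A100: {pops * 1000 / a100_tops:.1f}x")
```

Under these assumptions the math lands at roughly 2.1 POPS, a bit over 3x the A100's dense int8 peak, matching the comparison in the text.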
For the first time in over half a decade, Nvidia will be receiving real competition, and Nvidia’s customers will be eager to at least use Intel as a negotiation ploy for lower pricing. This could eat into Nvidia's substantial gross margins.
However, despite this theoretical throughput, from the practical numbers Intel shared, it is obvious Intel is still a bit behind in optimizing its software. In particular, the only benchmark Intel showed was the old ResNet-50, in which Intel beat the A100, but only by a very small margin; not by the up to 3x that may be possible.
The less mature software ecosystem may slow down adoption. So although Ponte Vecchio is a level above Nvidia in every single area, I would not necessarily expect Nvidia's data center revenue to collapse overnight, just as Intel's market share has not collapsed because of AMD.
Should investors be worried about this? Not particularly. It is well known that tremendous performance gains can be achieved by optimizing software to make the most of the hardware. Nvidia has obviously been doing this for several years already; Intel has too, but mostly on the CPU side. Intel also emphasized that the results were obtained on A0 (first-stepping) silicon, which implies further tuning headroom.
Summary
Intel called Ponte Vecchio a moonshot: a GPU that would leapfrog the competition and make Intel relevant again in AI/HPC in just one generation. With over 2x the transistors and over 2x the theoretical throughput, this is certainly what Intel has delivered (in part because of the N5 process advantage). Intel has been heavily investing in an open industry software ecosystem with "oneAPI", but still appears to have some work left.
Financials
For a more in-depth discussion about how this all impacts Intel's top and bottom line, and the stock performance, I would refer to my quarterly earnings discussions.
At a high level, though, my thesis for Intel has mainly been based on overall semiconductor market growth over time, especially in the data center. However, to capitalize on the rising demand for semiconductors (whether in PCs, in the cloud or elsewhere), Intel preferably needs leadership products, which has been a point of major concern over the last few years. (If Intel fails to deliver a strong portfolio, then it could lose more market share than the market grows.)
In the worst case, I still expect AMD to continue to gain some share, but AMD also aims to increase its gross margins, so it will have to make its own trade-offs between pricing and market share. Especially in the data center, Intel's CPUs as discussed will be heavily improved going forward, and Intel can bundle them with the rest of its portfolio, such as IPUs, FPGAs, NPUs and GPUs.
With that out of the way, I currently view Intel's stock price as a quite compelling entry point. The stock isn't much different compared to a year ago, when Intel just delayed the then-called 7nm process. Compared to then, Intel is currently in a significantly better position (with 7nm/4 fixed, the new foundry business, the products discussed above, the new CEO and management, etc.)
For example, both TSMC (TSM) and Nvidia have about twice the market cap, even though Intel still has greater revenue and earnings, and is vying to achieve leadership in both the foundry space (where TSMC currently leads) as well as the GPU/AI space (as discussed above).
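To make the relative-valuation argument concrete, here is an illustrative comparison. The figures below are rough, reader-supplied approximations for late 2021, loudly not sourced data; substitute current numbers before drawing any conclusion:

```python
# Illustrative valuation comparison. All figures are rough
# approximations (in $B) supplied for illustration only, not sourced
# financial data; replace them with current numbers before use.
companies = {
    #        market cap, trailing revenue, trailing net income ($B)
    "INTC": (215.0, 78.0, 20.0),
    "NVDA": (550.0, 22.0, 8.0),
}

for ticker, (mcap, revenue, income) in companies.items():
    print(f"{ticker}: P/S = {mcap / revenue:.1f}, P/E = {mcap / income:.1f}")
```

Even with very rough inputs, the spread in multiples is an order of magnitude, which is the core of the "priced for the wrong side" argument; whether the spread is justified by growth is of course the real debate.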
Intel Weaknesses
Although Intel showed great technology at Architecture Day, investors will also be looking for any weaknesses or risks.
On the client side, I agree with Intel's strategy. For a few years now, the core count of Intel's CPUs has been an issue. Hybrid architecture is the best solution to provide both best-in-class single- and multi-threaded performance, without sacrificing gross margins.
On the data center side, however, I will be looking forward to reviews and benchmarks of Sapphire Rapids. In particular, since Sapphire Rapids only uses the big P-cores, the amount of silicon required to reach a core count of 56 (still below AMD) is enormous. Although Intel gains a bit in yield/cost from the tiled chiplet design, Sapphire Rapids is twice the size of the largest possible monolithic chip.
As such, for applications that do not use some of those accelerators that have increased the die area of the chip, there may be quite a bit of idle/wasted silicon. On the other hand, Intel explicitly said it did not design Sapphire Rapids just for arbitrary benchmarks, but for real data center workloads. While most investors would judge a CPU only based on its core count, Intel argued in a lengthy article that that represents a superficial view.
Still, I would expect that Sapphire Rapids represents just the first iteration in Intel's chiplet design approach. In the future, there could be three solutions. First, Intel may design its CPUs more like Ponte Vecchio, with much smaller chiplets. Secondly, Intel may introduce the E-core in the data center, either as an E-core only CPU, or as part of a hybrid CPU. Thirdly, some customers may turn to Intel Foundry Services to tailor Intel's IP to their requirements.
Investor Takeaway
Intel has the potential to deal serious damage to its two main competitors, with its latest products. In this article, I have provided an overview of the technology.
Intel is delivering a new leadership x86 architecture, which it will use in both Alder Lake and Sapphire Rapids. On the client side, Intel is addressing its most urgent pain point (multi-threaded performance) with its innovative use of hybrid architecture. In the data center, Intel’s lower core count could be made up for with higher performance per core, and Intel also has many accelerators and other leadership features that AMD does not have. Nevertheless, these accelerators are only useful if they are actually used. Overall, Intel will be providing a very compelling and differentiated feature set to finally close the gap that AMD had created in the last few years.
In addition, Intel is further rearchitecting the data center by investing in its IPU portfolio, while also delivering the first substantial competition to Nvidia’s GPU monopoly in years, with Ponte Vecchio.
Lastly, although sales of GPUs are booming, many gamers are dissatisfied due to supply issues as well as the pricing environment. This provides Intel with an opportunity to become a third provider of GPUs. Since Intel’s initial GPUs will not have leadership in performance, their competitiveness will depend on pricing. I remarked that initial uptake may be strongest in the laptop segment if Intel can leverage its partnerships with OEMs (although it would have to displace Nvidia either way), as well as the multi-year roadmap.
In summary, this technology will be the cornerstone of Intel's resurgence in the coming years; Intel does not need to wait for its fabs to regain competitiveness. These products were in the pipeline long before Pat Gelsinger even rejoined Intel; combined with the powerful IDM 2.0 strategy, however, they could form the base for a multi-year journey of revenue growth, and hence perhaps shareholder returns and P/E multiple expansion.
On the stock, my view remains that Intel is well-positioned to become the first trillion-dollar semiconductor company ahead of Nvidia and TSMC.
Disclosure: I/we have a beneficial long position in the shares of INTC either through stock ownership, options, or other derivatives. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
Initially, back when Intel was "snubbing" PCIe 4.0, the market widely speculated that Intel might skip PCIe 4.0 entirely and move straight to PCIe 5.0. Especially at a time when Intel's processor cores and process technology were lagging, Intel hoped to claw back ground on other technical fronts. On Intel's roadmap, the Sapphire Rapids server platform coming to market will be the first to introduce DDR5 and PCIe 5.0 support, with a higher maximum lane count as well. Meanwhile, Alder Lake, Intel's first hybrid big/small-core design, will also support PCIe 5.0.

Beyond higher bandwidth, PCIe 5.0 introduces several industry-standard features that give enterprises greater flexibility. In 2019, Intel announced that its CXL (Compute eXpress Link) would be based on PCIe 5.0; CXL is an open-standard interconnect for high-speed CPU-to-device and CPU-to-memory links, designed to enhance data center performance. PCIe 5.0 itself is a high-speed serial expansion bus standard for moving data at high bandwidth between the components of a computer system; traffic between the CPU, GPU and various accelerators all travels over this PCIe "backbone".

Built on the Intel 7 process (formerly the enhanced 10nm SuperFin process), the centerpiece of Alder Lake's design is the first hybrid big/small-core arrangement in high-performance x86: high-performance Golden Cove cores with HyperThreading paired with single-threaded, Atom-derived Gracemont efficiency cores. Before this, the short-lived 2020 Lakefield processor combined one performance core with four efficiency cores, but it targeted a niche and appeared in only a handful of products.
Alder Lake's hybrid design can be seen as Intel's answer to the combined demands of the computing industry, mobile computing and real-world user experience. Heterogeneous core designs are already ubiquitous in mobile: by mixing big and small cores, the system can distribute resources to its many threads according to their computational needs. This is especially valuable under today's multitasking workloads, where not every core needs to run flat out; many threads need only modest processing power, yet on a traditional single-architecture x86 processor every workload, large or small, runs on the same class of core.

Golden Cove, the key to the overall performance uplift, is an evolution of Tiger Lake's Willow Cove, aimed at lower latency and better single-threaded performance. To that end, Golden Cove widens the decoder, strengthens the out-of-order engine, grows the execution units from 4 to 5 ALUs, and increases the number of cache ports, speeding up cache accesses and further reducing latency. The net result is that, clock for clock, Golden Cove delivers 19% more performance than Rocket Lake's Cypress Cove. Notably, however, while the Golden Cove architecture supports the new AMX instruction set, the consumer Alder Lake parts cannot use it, and the reason lies in the hybrid design itself.

Golden Cove and Gracemont are, after all, architectures of different classes. Both are built on the x86 instruction set, yet they conflict in places: beyond Gracemont's lack of HyperThreading noted above, consistency of execution means Alder Lake can only expose the instruction extensions both architectures support, namely AVX, because the Atom-derived Gracemont does not implement advanced extensions such as AVX-512 and AMX. Golden Cove itself does support these next-generation extensions, but to keep the processor working correctly Intel had to level everything down to the common subset. The upshot is that Alder Lake likely cannot use the VNNI-based DL Boost acceleration.
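This "common subset" behavior is directly observable from software. A minimal Linux-only sketch that reads the kernel's reported CPU feature flags (on an Alder Lake system with E-cores enabled, the avx512 flags are expected to be absent even though the P-cores physically implement them; the exact flag names follow Linux's `/proc/cpuinfo` conventions):

```python
# Print which AVX/AMX feature flags the running CPU exposes, by
# parsing the kernel's /proc/cpuinfo (Linux only). On hybrid Alder
# Lake parts the avx512/amx flags should be absent, reflecting the
# lowest-common-denominator instruction set described above.
def cpu_flags(path="/proc/cpuinfo"):
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass  # not Linux, or /proc unavailable
    return set()

flags = cpu_flags()
for feature in ("avx", "avx2", "avx512f", "avx512_vnni", "amx_tile"):
    print(f"{feature:12s} {'yes' if feature in flags else 'no'}")
```

On non-Linux systems the function simply reports no flags; a portable version would need a CPUID library instead.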