Radeon RX Vega Unveiled: AMD Announces $499 RX Vega 64 & $399 RX Vega 56, Launching August 14th

At this point, one must give credit to AMD for their marketing program for the Radeon RX Vega. The company has opted to drip feed information over many months, and as a result this has kept the public interested in the architecture and consumer RX Vega cards. Since the architecture was first named back in the spring of 2016, we’ve had architecture previews, product teasers, and even a new Frontier Edition to tide us over. Suffice it to say, there’s a great deal of fascination in finally seeing the products AMD has been beating the drums about for so long.

To that end, there’s good news today and there’s bad news today. In the interest of expediency, I may as well start with the bad news: today is not the launch day for the Radeon RX Vega. In fact, only right before this embargo expired did AMD even announce a launch date: August 14th. So for reviews, performance analyses, and of course purchasing, everyone will have to hold on just a bit longer.

The good news then is that even if today isn’t the Radeon RX Vega launch, AMD is finally making significant progress towards it by announcing the cards, the specifications, and the pricing. Gamers may not be able to buy the cards quite yet, but everyone is going to have some time to size up the situation before the proper launch of the cards next month. Overall this situation is very similar to the unveiling of the Radeon R9 290 series, where AMD announced the cards at a product showcase before launching them the following month.

So without further ado, let’s dive into the Radeon RX Vega family of cards and their specifications.

All told, AMD will be releasing 3 different RX Vega cards. All 3 cards are based on the same GPU, Vega 10, which powers the already released Radeon Vega Frontier Edition. So if you’re familiar with that card, then you should have an idea of what to expect here.

The top of AMD’s lineup is the Radeon RX Vega 64 Liquid Cooled Edition. This is a fully enabled Vega 10 card and it has the highest clockspeeds and highest power requirements of the stack. All told, this is 64 CUs, 64 ROPs, boosting to 1677MHz, and paired with 8GB of HBM2 memory clocked at 1.89Gbps. Typical board power for the card is rated at 345W. To cool such a card you will of course want liquid cooling, and living up to its name, AMD has included just that, thanks to a pump and 120mm radiator.

The second member of AMD’s lineup is the shorter-named vanilla Radeon RX Vega 64. Unlike its liquid cooled sibling, this is a traditional blower-type air cooled card. And for the purposes of AMD’s product stack, the company is treating the vanilla Vega 64 as the “baseline” card for the Vega 64 family. This means that the company’s performance projections are based on this card, and not the higher-clocked liquid cooled card.

The vanilla Vega 64 utilizes the same fully enabled Vega 10 GPU, with 64 CUs and 64 ROPs. The card’s reduced cooling capacity goes hand-in-hand with slightly lower clockspeeds of 1247MHz base and 1546MHz boost. Paired up with the Vega GPU itself is the same 8GB of HBM2 as on the liquid cooled card, still running at 1.89Gbps for 484GB/sec of memory bandwidth. Finally, this card ships with a notably lower TBP than the liquid cooled card, bringing it down by 50W to 295W.

Meanwhile, unlike any of the other cards in the RX Vega family, the Vega 64 will come in two shroud design options. AMD’s reference shroud is a plastic/rubber design similar to what we saw on the reference Radeon RX 480 launched last year. AMD will also have a “limited edition” version of the card with the same hardware specifications, but replacing the rubber shroud with a brushed aluminum shroud, very similar to the one found on the Vega Frontier Edition. Though it’s important to note that the only difference between these two cards is the material of the shroud; the cards are otherwise identical, PCBs, performance, cooling systems, and all.

On that note, AMD has only released a limited amount of information on the cooler design of the Vega 64, which is of particular interest as it’s an area where AMD struggled on the R9 290 and RX 480 series. We do know that the radial fan is larger, now measuring 30mm in radius (60mm in diameter). The fan in turn is responsible for cooling a heatsink that’s attached to the Vega 10 GPU + memory package via a vapor chamber, a typical design choice for high performance, high TDP video cards.

Finally, the last member of the RX Vega family is the Radeon RX Vega 56. The obligatory cut-down member of the group, this card gets a partially disabled version of the Vega 10 GPU with only 56 of 64 CUs enabled. On the clockspeed front, this card also sees reduced GPU and memory clockspeeds; the GPU runs at 1156MHz base and 1471MHz boost, while the HBM2 memory runs at 1.6Gbps (for 410GB/sec of memory bandwidth). Following the traditional cut-down card model, this lower performing card is also lower power – and quite possibly the most power efficient RX Vega card – with a 210W TDP, some 85W below the Vega 64. Meanwhile, other than its clockspeed the card’s HBM2 memory is untouched, shipping with the same 8GB of memory as the other RX Vega members.
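The quoted bandwidth figures for both cards follow directly from Vega 10’s 2048-bit HBM2 bus (two stacks at the standard 1024 bits each). A quick sanity check of the arithmetic:

```python
# Memory bandwidth = per-pin data rate * bus width / 8 bits-per-byte.
# Vega 10 uses two HBM2 stacks for a 2048-bit bus, per the standard
# 1024-bit-per-stack HBM2 interface.
BUS_WIDTH_BITS = 2048

def hbm2_bandwidth_gbs(data_rate_gbps):
    """Memory bandwidth in GB/s for a given per-pin data rate in Gbps."""
    return data_rate_gbps * BUS_WIDTH_BITS / 8

print(hbm2_bandwidth_gbs(1.89))  # Vega 64: 483.84 GB/s, rounded to 484
print(hbm2_bandwidth_gbs(1.6))   # Vega 56: 409.6 GB/s, rounded to 410
```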

Moving on, perhaps the burning question for many readers now that they have the specifications in hand is expected performance, and this is something of a murky area. AMD has published some performance slides for the Vega 64, but they haven’t taken the time to extensively catalog what they see as the competition for the card and where the RX Vega family fits into that. Instead, what we’ve been told is to expect the Vega 64 to “trade blows” with NVIDIA’s GeForce GTX 1080.

In terms of numbers, the few that the company has published have focused on minimum framerates over average framerates, opting to emphasize smoothness and the advantage they believe they have over the aforementioned GTX 1080. As always, competitive numbers should be taken with a (large) grain of salt, but for the time being this is the best guidance we have on what to expect for the RX Vega family’s performance.

Otherwise for the Vega 64 Liquid and Vega 56, we don’t have any other performance figures. Expect the former to outperform the air cooled Vega 64 – though perhaps not massively – while the Vega 56 will come in notably lower.

The Corsair Neutron NX500 (400GB) PCIe SSD Review: Big Card, Big Pricetag

Corsair’s recent SSDs have all been based on Phison’s turnkey SSD solutions, where Corsair specifies how the drive will look, but the internals of the drive are essentially identical to those of a dozen other brands. Using turnkey solutions like this is by far the easiest and least risky way for a brand to ship SSDs, but it leaves very little room for product differentiation. Corsair’s Neutron XTi and Force LE SATA drives and their Force MP500 M.2 NVMe SSD don’t offer anything unique under Corsair’s sticker. The new Corsair Neutron NX500 uses the same Phison E7 controller as the MP500, but it aims to stand out from the crowd.

The Corsair Neutron NX500 is not the first retail Phison E7 SSD to use the PCIe add-in card form factor with a heatsink, but it is the first to reserve a very large spare area, leaving just 400GB usable space on our sample compared to the typical 480GB. This kind of high overprovisioning ratio is usually only found on enterprise SSDs intended for write-heavy workloads. We saw these oddball capacities with the Intel SSD 750, but there it was due in part to Intel’s 18-channel controller compared to 4 or 8 channels on most consumer drives. The Corsair NX500 actually has substantially more overprovisioning than the Intel SSD 750.
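If we assume the board carries 512 GiB of raw NAND, the usual amount behind a 480GB-class Phison E7 drive (the raw figure is our assumption, not a published Corsair spec), the overprovisioning gap works out as follows:

```python
# Overprovisioning ratio = (raw NAND - user capacity) / user capacity.
# 512 GiB of raw flash assumed, typical for drives sold as 480 GB.
RAW_BYTES = 512 * 2**30  # 512 GiB of NAND

def op_ratio(user_gb):
    """Overprovisioning ratio for a given usable capacity in GB."""
    user_bytes = user_gb * 10**9
    return (RAW_BYTES - user_bytes) / user_bytes

print(f"NX500 400GB:   {op_ratio(400):.1%}")  # ~37.4%
print(f"typical 480GB: {op_ratio(480):.1%}")  # ~14.5%
```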

The custom heatsink makes the Corsair Neutron NX500 visually quite distinct as it carries typical Corsair styling cues. The PCIe bracket is perforated with triangular vents that match the Corsair ONE’s side panels, while the rest of the drive is decked in variations on black. We know from our past testing of Phison E7 drives that the heatsink’s role is more aesthetic than functional, but as the heaviest SSD heatsink I’ve yet encountered it should guarantee that the controller stays cool. The NX500 does not include any thermal pads between the heatsink and the flash memory, and there are no thermal pads between the drive and the backplate. The faux carbon fiber plastic shroud over part of the NX500’s heatsink could theoretically detract from its cooling capacity, but the wattage of the Phison E7 chip is far too low for that to matter.

The PCB under the NX500’s heatsink is barely modified from the Phison reference design. It does actually bear Corsair’s name, but the overall layout is identical to all the other Phison E7 PCIe cards we’ve seen, right down to the unpopulated solder pads for power loss protection capacitors—both cylindrical through-hole capacitors and surface-mount solid capacitors are provided for. A custom PCB half the size could have worked without making the board crowded. The flash is the usual Toshiba 15nm MLC. The NX500 is equipped with twice as much DRAM as is typical for an SSD with this much NAND flash.

Quite unsurprisingly given the overprovisioning situation, the Corsair Neutron NX500 comes with a firmware version we have not previously encountered on other Phison E7 products. The NX500 ships with firmware version E7FM04.5, which I’ll abbreviate as version 4.5. We’ve previously dealt with versions 1.0, 2.0 and 2.1, and an upcoming review will feature a 240GB drive using version 3.6.

An NVMe SSD in the PCIe add-in card form factor with a big heatsink and using MLC NAND is obviously a niche product for the high end of the market. It makes sense that Corsair’s starting the NX500 line with 400GB and 800GB capacities while the more mainstream MP500 M.2 SSD ranges from 120GB to 480GB. Corsair rates the NX500 with a total write endurance of 698TB for the 400GB model (the same as their 480GB MP500) and 1396TB for the 800GB model, but the NX500 comes with a five-year warranty compared to the MP500’s three years.
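Those endurance ratings can be converted into the more familiar drive-writes-per-day (DWPD) metric over each drive’s warranty period, a standard calculation rather than a Corsair-published figure:

```python
# DWPD = total rated write endurance / (warranty days * drive capacity).
def dwpd(endurance_tb, capacity_gb, warranty_years):
    """Drive writes per day implied by a TBW rating over the warranty."""
    total_bytes = endurance_tb * 10**12
    days = warranty_years * 365
    return total_bytes / (days * capacity_gb * 10**9)

print(round(dwpd(698, 400, 5), 2))  # NX500 400GB over 5 years: ~0.96 DWPD
print(round(dwpd(698, 480, 3), 2))  # MP500 480GB over 3 years: ~1.33 DWPD
```

Interestingly, the longer warranty means the NX500’s effective DWPD rating is lower than the MP500’s despite the identical TBW figure.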

This review has two goals: to compare the NX500’s overprovisioning and other firmware changes against earlier Phison E7 drives, and to compare the NX500 against the broader field of current NVMe SSDs with similar capacities. The other drives considered in this review include:

Patriot Hellfire M.2 480GB, Phison E7 with firmware version 2.1
Zotac SONIX 480GB, add-in card Phison E7 with firmware version 1.0
Plextor M8PeY 512GB and Toshiba OCZ RD400A 512GB, two M.2 SSDs in add-in card adapters for cooling purposes, both using the same Toshiba 15nm MLC but with controllers other than the Phison E7
Samsung 950 PRO 512GB and 960 EVO 1TB. We don’t have samples of the 512GB 960 PRO or 500GB 960 EVO, so these are the closest Samsung equivalents we can provide at the moment.
WD Black 512GB and Intel SSD 600p 512GB, entry-level M.2 NVMe SSDs using TLC NAND. One of these is usually the cheapest NVMe SSD available at any given moment.
Samsung 850 PRO 512GB, representing the high end of the SATA SSD market

Intel’s ‘New’ 8th Generation Processors are Built on Kaby Lake, Add Additional Cores

Ever since Intel introduced the first-generation Core i7, it’s followed a predictable series numbering. First-generation Core processors were codenamed Nehalem, second-generation CPUs were Sandy Bridge, followed by Ivy Bridge (3rd), Haswell (4th), and so on. In each case, new chips, whether they were die shrinks or new architectures, were assigned a new product number. Today, with its 8th-generation chips, Intel is explicitly changing that policy. Unlike previous product generations, the 8th-generation family launching today will span multiple chip families built on 14nm+ (Kaby Lake), 14nm++ (Coffee Lake), and 10nm (Cannon Lake).

The four U-series chips Intel is launching today are fundamentally based on Kaby Lake with the same architecture, the same GPU, and almost the same capabilities. The one minor update to the GPU side of the equation is support for HDMI 2.0 and HDCP 2.2, without any need for third-party solutions. So what, exactly, is new about these chips? Two things: Larger core counts, and slightly higher clock speeds at maximum Turbo.

The table above shows how the new Core i7 chips compare against their 7th-generation predecessors at the 15W TDP. Intel is trading a significant amount of base clock speed for core counts, but the maximum turbo speed on these chips is still higher, in some cases, than that of the parts they replace. The new i5 CPUs aren’t shown here, but they mirror the Core i7 mobile parts, with higher turbo clocks and quad-core configurations with Hyper-Threading, which is historically unusual for the quad-core i5 lineup. Intel doesn’t put much emphasis on its quad-core i5s, but the 7th-generation quad-cores didn’t have Hyper-Threading at all. The new chips do.

Intel’s mobile revamp mirrors product and price changes the company has already introduced. For the past six years, Intel has followed the same basic processor philosophy: In mobile, dual-core processors were the norm at every level, with only a few quad-core / eight-thread chips available. These have always occupied the top of the product stack and typically been offered only in higher TDP brackets. These new chips change all that.

Performance Scaling Still Unknown
There’s a bit of mixed messaging over how much performance these new cores will offer. According to Intel, it expects a 40% overall improvement, with 25% coming from the addition of two more cores. ‘Design’ and ‘Manufacturing’ also add to the total, albeit in smaller amounts. But that’s actually less performance than we’d expect to see, given that doubling core counts from a dual CPU to a quad CPU can drive more than 25% improvement on its own in desktops. Most modern applications scale fairly well up to four cores / eight threads, and while that’s not an absolute, the 25% figure is still lower than expected.
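As a rough illustration of why 25% reads as conservative, Amdahl’s law lets us back out what parallel fraction would cap a 2-to-4-core jump at exactly +25%. The workload model here is ours, not Intel’s:

```python
# Amdahl's law: execution time on n cores is proportional to (1-p) + p/n,
# where p is the fraction of the work that parallelizes.
def relative_speedup(p, n_old, n_new):
    """Speedup from n_old to n_new cores for parallel fraction p."""
    return ((1 - p) + p / n_old) / ((1 - p) + p / n_new)

# A parallel fraction of 4/7 (~57%) makes doubling from 2 to 4 cores
# worth exactly the +25% Intel quotes:
print(relative_speedup(4/7, 2, 4))            # 1.25
# A well-threaded 90%-parallel workload would gain considerably more:
print(round(relative_speedup(0.9, 2, 4), 2))  # ~1.69
```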

As you can see, this chart suggests that Intel has set fairly aggressive Turbo Mode clocks for these cores. Just how theoretical these core clocks are remains to be seen. Intel began offering OEMs more flexibility to hit their TDP and performance targets several years ago, but doing so created odd performance dips and spikes. In several cases, the lowest-end Core M you could buy actually yielded better performance than the higher-end chips due to thermal issues. Whether or not that will occur here is something we can’t judge until products have shipped.

What’s Next for 8th-Gen Core?
Intel has announced that it will launch its next generation desktop processors “in fall,” but those of you hoping to drop a six-core i5 or 12-thread i7 into an existing system are out of luck. The Coffee Lake refresh will require 300-series chipsets, and will not be backwards compatible with existing products. Given how little time it’s been since Intel introduced Kaby Lake, the quick hop from the 200-series to the 300-series won’t sit well with people who just upgraded to the 7700K, especially if the desktop 8th-generation cores make six cores available for the same price Intel used to charge for four.

$2.9 Million Pizza-Making Robot Still Can’t Make Pizza

Fear of AI and robotics is fairly common in humans. There have been ample predictions about how the robot/AI revolution will destroy an enormous number of jobs, while potentially posing an existential risk to the long-term survival of the human race. In the real world, however, our robot designs are much closer to a manufacturing robot on an assembly line than, say, Data (or even Bender). Case in point: Rodyman, the $2.9 million robot. For the past four years, Professor Bruno Siciliano and Prisma Lab in Italy have been trying, and not entirely succeeding, to teach a robot how to make pizza.

“Preparing a pizza involves an extraordinary level of agility and dexterity,” Professor Siciliano told Scientific American earlier this summer. Rodyman can put toppings on a pizza, but it has real trouble with the dough, and has yet to master the art of tossing without tearing the dough apart.

This project has a serious goal, despite the odd-seeming task. The entire point of the Rodyman project, as stated by Prisma Lab, is to create a “unified framework for dynamic manipulation where the mobile nature of the robotic system and the manipulation of non-prehensile non-rigid or deformable objects will explicitly be taken into account.”

The Prisma Lab website continues:

Novel techniques for 3D object perception, dynamic manipulation control and reactive planning will be proposed. An innovative mobile platform with a torso, two lightweight arms with multi-fingered hands, and a sensorized head will be developed for effective execution of complex manipulation tasks, also in the presence of humans. Dynamic manipulation will be tested on an advanced demonstrator, i.e. pizza making process, which is currently unfeasible with the prototypes available in the labs. The research results to be achieved in RODYMAN will contribute to paving the way towards enhancing autonomy and operational capabilities of service robots, with the ambitious goal of bridging the gap between robotic and human task execution capability.

The video above shows part of the training process. The video is in Italian and Google’s Auto Translate subtitle feature is truly hilariously terrible, so I recommend watching it without attempting to comprehend the audio. The gentleman making pizza while Rodyman imitates his movements is Enzo Coccia, a highly skilled pizzaiolo (pizza maker). Coccia wears a motion capture suit while the robot observes him and attempts to copy his movements. According to Professor Siciliano, Rodyman has the ability to learn from its mistakes and has improved over time, though it still can’t manage the pizza dough problem.

Rodyman is scheduled to make his debut at the Naples Pizza Festival (officially now the best thing ever) in May of 2018. Hopefully his issues will be ironed out by then. If not, that $2.9 million funding grant from the EU will represent a lot of blown dough.

Biostar X370GTN ITX AM4 Motherboard Review

If you live in North America, chances are that Biostar is not the first name that pops into your head when thinking about motherboard manufacturers. While they have a significant presence in Asia, they haven’t really made a big splash on this continent, partially because they don’t make the flashy high-end motherboards that get everyone’s attention. Thankfully, sometimes you just have to make something unique to get a lot of attention, and that is why we are reviewing the Biostar X370GTN.

Biostar one-upped everyone in the industry by not only announcing the first Mini-ITX AM4 motherboard, but by releasing it before anyone else had even announced their versions. Not only that, but at $110 USD / $150 CAD, the X370GTN is really quite affordable too. While we have a lot of experience with Intel-based Mini-ITX motherboards that are at least 50% more expensive than this AM4 model, it will be interesting to see what Biostar has been able to create with a low price point and a tiny 7″ x 7″ piece of PCB.

When we look at the specs, the first things that stand out are the seven-phase CPU power design and the 4-pin CPU power connector. Could these prove to be limitations when overclocking? We are going to find out. When it comes to connectivity and expansion, this motherboard is quite similar to other pint-sized offerings. There are four SATA 6Gb/s ports, one full-speed PCI-E 3.0 x4 M.2 slot, two full-speed USB 3.1 Gen2 ports, one Type-A and one Type-C, up to six USB 3.0 ports, and one USB 2.0 header, for a grand total of ten possible USB ports. When it comes to networking, there is one Realtek Dragon-powered gigabit LAN port and no onboard Wi-Fi, which is a slight disappointment but not at all unexpected given the low price point.

As you would expect, there is only one PCI-E x16 slot that will likely house a graphics card if you’re planning to use a Ryzen processor. However, this motherboard also supports AMD’s new seventh generation Bristol Ridge APUs and it will surely also support the upcoming Zen-based Raven Ridge APUs. If you do install an APU, your video output choices will be limited to DVI-D or HDMI 1.4.

The onboard audio solution is based on the Realtek ALC892 ten-channel codec, which is familiar to us since it was Realtek’s high-end audio codec all the way back in 2010-2011. The codec is helped along by a chunky pair of ‘Hi-Fi’ audio capacitors and a headphone sense amplifier, and the whole audio section is protected by a PCB-level isolation line that helps keep noise out of the audio signal. If and when you listen to music, you will be able to make the onboard RGB LED lighting dance to the beat. Not only is there lighting built into the single MOSFET cooler, but there are also two headers on which to attach 5050 RGB LED light strips.

Despite being small, the Biostar X370GTN appears to be competently equipped for its price. However, implementation is everything, and we are going to find out whether Biostar had to cut any corners in order to be first to the market.

The Enermax Revolution SFX 650W PSU Review: Compact & Capable

PC gaming parts are constantly getting more powerful, efficient, and affordable, which is making the PC platform a very serious rival of consoles when it comes to living room gaming. Especially since the release of the Pascal GPUs some months ago, which made seamless 4K gaming with a single card practically possible, the number of users investing in a living room gaming PC has been increasing significantly.

When it comes to living room gaming PCs however, size is a very important factor and may even outweigh that of the cost. A large PC tower is rarely a feasible option, with users demanding small and elegant designs that match the aesthetics of a modern living room. With many reputable manufacturers offering products specifically designed for living room gaming PCs, the selection of a proper case and peripherals is not an issue. The selection of internal components however can become an ordeal, as compact cases often have numerous limitations.

In order to conserve space, either to make the design even smaller or to make room for other components, many compact case designs nowadays make use of SFX power supply units. A few years back it would have been impossible to power a powerful gaming system with an SFX PSU, but more recent designs make use of more efficient energy conversion platforms and components, allowing them to reach power outputs that were unheard of for SFX PSUs a few years ago.

This year several manufacturers have released high output SFX designs, and Enermax is one of the most prominent names. The company announced the Revolution SFX units back in December, highlighting their cost-effective design and full SFX compatibility. The Revolution SFX units are available in just two variations, the ERV550SWT and the ERV650SWT, with a maximum power output of 550W and 650W respectively. In this review we are having a close look at the more powerful 650W version.

Enermax went with a minimalistic, dark design for the packaging of their SFX PSUs, with icons highlighting its most important features. The thick walls of the cardboard box and the polystyrene foam pieces inside offer more than ample shipping protection to the small unit.

Inside the box we found a luxurious bundle that we rarely encounter even with top tier products. Enermax supplies a typical manual, four black mounting screws, an SFX to ATX adapter for the installation of the PSU in an ATX-compliant case, two long and two short cable management straps (one of each length in red and one in black), and finally a “limited edition” Bluetooth speaker (the color is random). The small speaker is not very powerful or clear, but it is an interesting (if unusual) small gift to have.

Patriot Publishes List of AMD Ryzen Compatible DIMMs: Up to DDR4-3400, 64 GB

Patriot has published a list of its memory modules that are verified and compatible with AMD Ryzen processors. This includes the Viper 4 and Viper Elite modules that are already on the market, and the announcement was made after the company ran extensive tests of its DDR4 DIMMs on different platforms supporting AMD’s latest CPUs.

As previously reported, with AMD’s release of its Zen based CPUs a few months ago, there were some growing pains in the new platform, particularly with RAM speed and compatibility. As it turned out, not all high-end DDR4 memory modules (at the time) would work with AMD Ryzen processors at their labeled data transfer rates. As a result, a number of DDR4 DIMM suppliers have released modules specifically qualified for enthusiast-grade AMD Ryzen-based systems and factory tested for compatibility. Moreover, AMD is working with motherboard makers to improve compatibility of its Ryzen platforms with memory modules via BIOS updates, recently promoting its AGESA update. In the meantime, end users are advised to get DDR4 DIMMs that are labeled for AM4 to ensure compatibility – these modules should be factory-tested to be compatible with the AMD Ryzen.

Patriot has tested dozens of its single unit DIMMs, and as dual-/quad-channel kits, with multiple motherboards from ASUS, ASRock, GIGABYTE and MSI based on AMD’s X370, B350 and A320 chipsets (see the details in the table below) for compatibility with AMD Ryzen 7 and Ryzen 5 CPUs. Among the tested modules are Patriot’s Viper 4 and Viper Elite DIMMs with 4 GB, 8 GB and 16 GB capacities rated to operate at 2133-3400 MT/s with CL15 and CL16 timings. The company published its list of AMD Ryzen-compatible DDR4 DIMMs and we republish it below.
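For reference, those speed and timing pairs can be converted into absolute CAS latency, which shows the faster kits are not just higher bandwidth but also lower latency. The conversion below is standard DDR arithmetic, not a Patriot-published figure:

```python
# First-word CAS latency in ns = CAS cycles / memory clock.
# DDR transfers twice per clock, so clock (MHz) = data rate (MT/s) / 2.
def cas_latency_ns(mt_s, cl):
    """Absolute CAS latency in nanoseconds for a DDR data rate and CL."""
    clock_mhz = mt_s / 2
    return cl / clock_mhz * 1000

print(round(cas_latency_ns(2133, 15), 1))  # DDR4-2133 CL15: ~14.1 ns
print(round(cas_latency_ns(3400, 16), 1))  # DDR4-3400 CL16: ~9.4 ns
```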

All of these modules are already on the market in single-, dual-, and quad-channel configurations using capacities from 8 GB to 64 GB. With this list, it should be easy to find out about compatibility of Patriot’s Viper 4 and Viper Elite with AMD’s latest chips by checking out their model numbers.

AMD Releases Bristol Ridge to Retail: AM4 Gets APUs

The focus for AMD’s AM4 platform is to span a wide range of performance and price points. We’ve had the launch of the Ryzen CPU family, featuring quad cores up to octa-cores with the new Zen microarchitecture, but AM4 was always designed to be a platform that merges CPUs and integrated graphics. We’re still waiting for the new Zen cores in products like Ryzen to find their way down into the desktop in the form of the Raven Ridge family, however those parts are going through the laptop stack first and will likely appear on the desktop either at the end of the year or in Q1 next year. Until then, users get to play with Bristol Ridge, originally released back in September 2016, but finally making its way to retail.

First the OEMs, Now Coming To Retail
Back in 2016, AMD released Bristol Ridge to OEMs only. These parts were the highest performing iteration of AMD’s Bulldozer design, using Excavator v2 cores on an AM4 motherboard and using DDR4. We saw several systems from HP and others that used proprietary motherboard designs (as the major OEMs do) combined with these CPUs at entry level price points. For example, a base A12-9800 system with an R7 200-series graphics card was sold around $600 at Best Buy. Back at launch, Reddit user starlightmica saw this HP Pavilion 510-p127c in Costco:

$600 gets an A12-9800, 16GB of DDR4, a 1TB mechanical drive, an additional R7 2GB graphics card, 802.11ac WiFi, a DVDRW drive, and a smattering of USB ports.

Initially AMD’s focus on this was more about B2B sales. AMD’s reasoning for going down the OEM only route was one of control and marketing, although one might suggest that by going OEM only, it allowed distributors to clear their stocks of the previous generation APUs before Ryzen hit the shelves.

Still, these were supposed to be the highest performing APUs that AMD has ever made, and users still wanted a piece of the action. If you were lucky, a part might pop up from a broken down system on eBay, but for everyone else, the question has always been when AMD would make them available through regular retail channels. The answer is today, with a worldwide launch alongside Ryzen 3. AMD states that the Bristol Ridge chips aren’t designed to be hyped up as the biggest thing, but fill in the stack of CPUs below $130, an area where AMD has had a lot of traction in the past, and still provide the best performance-per-dollar APU on the market.

The CPUs
The eight APUs and three CPUs being launched span from a high-frequency A12 part down to the A6, and they all build on the Bristol Ridge notebook parts that were launched in 2016. AMD essentially skipped the 6th Gen, Carrizo, for desktop, as the Carrizo design was significantly mobile focused; for Carrizo we ended up with one desktop CPU, the Athlon X4 845 (which we reviewed), with DDR3 support but no integrated graphics. Using an updated 28nm process, AMD was able to tweak the microarchitecture and allow full-on APUs for desktops using a similar design.

AMD’s new entry-level processors will hit a maximum of 65W in their official thermal design power (TDP), with the launch offering a number of 65W and 35W parts. There was the potential to offer CPUs with a configurable TDP, as with previous APU generations, however much like the older parts that supported 65W/45W modes, it was seldom used, and chances are we will see system integrators stick with the default design power windows here. Also of note is the naming scheme: any 35W part now has an ‘E’ at the end of the processor name, allowing for easier identification.

Back when these CPUs were first launched, we were able to snag a few extra configuration specifications for each of the processors, including the number of streaming processors in each, base GPU frequencies, base Northbridge frequencies, and confirmation that all the APUs launched will support DDR4-2400 at JEDEC sub-timings.

The A12-9800 at the top of the stack is an interesting part on paper. If we do a direct comparison with the previous high-end AMD APUs, the A10-7890K, A10-7870K and A10-7860K, a lot of positives end up on the side of the A12.

The frequency of the A12-9800 gives it a greater dynamic range than the A10-7870K (having 3.8-4.2 GHz, rather than 3.9-4.1), but with the Excavator v2 microarchitecture, improved L1 cache, AVX 2.0 support and a much higher integrated graphics frequency (1108 MHz vs. 866 MHz) while also coming in at 30W less TDP. The 30W TDP reduction is the most surprising – we’re essentially getting better than the previous A10-class performance at a lower power, which is most likely why they started naming the best APU in the stack an ‘A12’. Basically, the A12-9800 APU will be an extremely interesting one to review given the smaller L2 cache but faster graphics and DDR4 memory.

One thing users will notice is the PCIe support: these Bristol Ridge APUs only have PCIe 3.0 x8 for graphics. This means that most X370 motherboards that have two GPU slots will leave the second slot useless. AMD suggests moving to B350 instead, which only allows one add-in card.

The Integrated GPU
For the A-series parts, integrated graphics is the name of the game. AMD configures the integrated graphics in terms of Compute Units (CUs), with each CU having 64 streaming processors (SPs) using GCN 1.3 (aka GCN 3.0) architecture, the same architecture as found in AMD’s R9 Fury line of GPUs. The lowest processor in the stack, the A6-9500E, will have four CUs for 256 SPs, and the A12 APUs will have eight CUs, for 512 SPs. The other processors will have six CUs for 384 SPs, and in each circumstance the higher TDP processor typically has the higher base and turbo frequency.
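The CU-to-SP arithmetic in this paragraph can be summarized in a few lines (the tier labels are illustrative, taken from the parts described above):

```python
# GCN integrated graphics: stream processors = compute units * 64.
SP_PER_CU = 64

# CU counts per tier as described in the article.
lineup = {
    "A6-9500E (4 CU)": 4,
    "mid-stack (6 CU)": 6,
    "A12 (8 CU)": 8,
}

for name, cus in lineup.items():
    print(f"{name}: {cus * SP_PER_CU} SPs")
# -> 256, 384, and 512 SPs respectively
```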