Our Interesting Call with CTS-Labs

In light of the recent announcement of potential vulnerabilities in Ryzen processors, two stories have emerged. The first is that AMD processors could have secondary vulnerabilities in the secure processor and the ASMedia chipsets. The second concerns the company that released the report, CTS-Labs: its approach to this disclosure, and the background, intentions, and corporate structure of this previously unknown security-focused outfit. Depending on the angle you take in the technology industry, whether as a security expert, a company, the press, or a consumer, one of these stories should interest you.

In our analysis of the initial announcement, we took time to look at what information we had on the flaws, and identified a number of things about CTS-Labs that did not fit our standard view of a responsible disclosure, as well as a few points on Twitter that did not seem to add up. Since then, we have approached a number of experts in the field and a number of companies involved, and attempted to drill down into the parts of the story that are not so obvious. I must thank the readers who reached out to me over email and through Twitter; they have helped immensely in getting to the bottom of what we are dealing with.

On the back of this, CTS-Labs has been performing a number of press interviews, leading to articles such as this one at our sister site, Tom’s Hardware. CTS reached out to us as well; however, a number of factors delayed the call. Eventually we found a time to suit everyone, and it was confirmed in advance that everyone was happy for the call to be recorded for transcription purposes.

Joining me on the call was David Kanter, a long-time friend of AnandTech, semiconductor industry consultant, and owner of Real World Technologies. From CTS-Labs, we were speaking with Ido Li On, CEO, and Yaron Luk-Zilberman, CFO.

The text here was transcribed from the recorded call. Some superfluous/irrelevant commentary has been omitted, and the wording has been tidied a little for readability.

This text is being provided as-is, with minor commentary at the end. There is a substantial amount of interesting detail to pick through, and we try to tackle both sides of the story in our questioning.

IC: Who are CTS-Labs, and how did the company start? What are the backgrounds of the employees?

YLZ: We are three co-founders, graduates of a unit called 8200 in Israel, a technological intelligence unit. We have a background in security, and two of the co-founders have spent most of their careers in cyber-security, working as consultants for the industry and performing security audits for financial institutions, defense organizations, and so on. My background is in the financial industry, but I have a technological background as well.

We came together in the beginning of 2017 to start this company, whose focus was to be hardware in cyber security. As you guys probably know, this is a frontier/niche area now that most of the low-hanging fruit in software has been picked. So this is where the game is moving, we think at least. The goal of the company is to provide security audits, and to deliver reports to our clients on the security of those products.

This is our first major publication. Mostly we do not go public with our results, we just deliver them to our customers. I should say, very importantly, that we never deliver the vulnerabilities themselves that we find, or the flaws, to a customer to whom the product does not belong. In other words, if you come to us with a request for an audit of your own product, we will give you the code and the proof-of-concepts, but if you want us to audit someone else’s product, even as a consumer of that product, a competitor, or a financial institution, we will not give you the actual code – we will only describe to you the flaw that we find.

This is our business model. This time around, in this project, we started with ASMedia, and as you probably know the story moved to AMD as they imported the ASMedia technology into their chipset. Having studied one, we started studying the other. This became a very large and important project, so we decided we were going to go public with the report. That is what has brought us here.

IC: You said that you do not provide flaws to companies that are not the manufacturer of what you are testing. Does that mean that your initial ASMedia research was done with ASMedia as a customer?

ILO: No. So we can audit a product that the manufacturer of the product orders from us, or that somebody else, such as a consumer or an interested third party, orders from us, and then we will provide a description of the vulnerabilities, much like our whitepaper, but without the technical details needed to actually implement the exploit.

Actually ASMedia was a test project, as we’re engaged in many projects, and we were looking into their equipment and that’s how it started.

IC: Have you, either professionally or as a hobby, published exploits before?

ILO: No we have not. That being said, we have been working in this industry for a very long time: we have done security audits for companies, found vulnerabilities, and given that information to the companies as part of consultancy agreements, but we have never actually gone public with any of those vulnerabilities.

IC: What response have you had from AMD?

ILO: We got an email today saying that they were looking into it.

DK: If you are not providing Proof of Concept (PoC) to a customer, or technical details of an exploit, with a way to reproduce it, how are you validating your findings?

YLZ: After we do our validation internally, we bring in a third-party validator to look into our findings. In this case it was Trail of Bits, if you are familiar with them. We gave them the full code, the full proof of concept with instructions to execute, and they have verified every single claim that we provided to them. They have gone public with this as well.

In addition to that, in this case we also sent our code to AMD, and then to Microsoft, HP, and Dell, the integrators, and also some domestic and other security partners. So they all have the findings. We decided not to make them public. The reason is that we believe it will take many, many months for the company, even under ideal circumstances, to come out with a patch. So if we wanted to inform consumers about the risks that they have with the product, we just couldn’t afford, in our minds, to not inform the public.

DK: Even when the security team has a good relationship with a company whose product has a potential vulnerability, simply verifying a security hole can take a couple of days at least. For example, with the code provided with Spectre, a security-focused outsider could look at the code and make educated guesses within a few minutes as to the validity of the claim.

ILO: What we’ve done is this. We have found thirteen vulnerabilities, and we wrote a technical write-up on each one of those vulnerabilities with code snippets showing exactly how they work. We have also produced working PoC exploits for each one of the vulnerabilities, so you can actually exploit each one of them. And we have also produced very detailed tutorials on how to run the exploits on test hardware, step-by-step, to get all the results that we have been able to produce here in the lab. We documented it so well that when we gave it to Trail of Bits, they took it and ran the procedures by themselves, without talking to us, and reproduced every one of the results.

We took this package of documents, procedures, and exploits, and we sent it to AMD and the others. This is the same process that took Trail of Bits about 4-5 days to complete, so I am very certain that they will be able to reproduce this. Also, we gave them a list of exactly what hardware to buy, and instructions, with all the latest BIOS updates and everything.

YLZ: We faced this problem – how do we make a third-party validator not just sit there and say ‘this thing works’, but actually do it themselves without contacting us. We had to write a detailed manual, a step-by-step kind of thing. So we gave it to them, and Trail of Bits came back to us in five days. I think that the guys we sent it to are definitely able to do it within that time frame.

IC: Can you confirm whether money changed hands with Trail of Bits?

(This was publicly confirmed by Dan Guido earlier, stating that they were expecting to look at one test out of curiosity, but 13 came through, so they invoiced CTS for the work. Reuters reports that a $16,000 payment was made as ToB’s verification fee for third-party vulnerability checking.)

YLZ: I would rather not make any comments about money transactions and things of that nature. You are free to ask Trail of Bits.

IC: The standard procedure for vulnerability disclosure is to have a CVE filing and MITRE numbers. We have seen public disclosures, even 0-day and 1-day public disclosures, with relevant CVE IDs. Can you describe why you haven’t done so in this case?

ILO: We have submitted everything we have to US-CERT and we are still waiting to hear back from them.

IC: Can you elaborate as to why you did not wait for those numbers to come through before going live?

ILO: It’s our first time around. We haven’t – I guess we should have – this really is our first rodeo.

IC: Have you been in contact with ARM or Trustonic about some of these details?

ILO: We have not, and to be honest with you I don’t really think it is their problem. So AMD uses Trustonic t-Base as the base for their firmware on the secure processor, but they have built quite a bit of code on top of it, and in that code are security vulnerabilities that don’t have much to do with Trustonic t-Base. So we really don’t have anything to say about t-Base.

IC: As some of these attacks go through TrustZone, an Arm Cortex-A5, and the ASMedia chipsets, can you speak to whether other products with these features can also be affected?

ILO: I think that the vulnerabilities found are very much … Actually let us split this up between the processor and the chipset as these are very different.

For the secure processor, AMD built quite a thick layer on top of Trustonic t-Base. They added many features, and they also added a lot of features that break the isolation between processes running on top of t-Base. So there are a bunch of vulnerabilities there that are not from Trustonic. In that respect we have no reason to believe that we would find these issues on any other product that is not AMD’s.

Regarding the chipset, there you actually have vulnerabilities that affect a range of products, because as we explained earlier, we actually came to AMD by first looking at ASMedia chips. Specifically we were looking into several lines of chips; one of them is the USB host controller line from ASMedia. We’re talking about the ASM1042, ASM1142, and the recently released ASM1143. These are USB host controllers that you put on the motherboard; they connect on one side to PCIe, and on the other side they give you some USB ports.

What we found are these backdoors that we have been describing, which come built into the chips – there are two sets of backdoors, hardware backdoors and software backdoors, and we implemented clients for those backdoors. The client works on AMD Ryzen machines, but it also works on any machine that has these ASMedia chipsets, so quite a few motherboards and other PCs are affected by these vulnerabilities as well. If you search online for motherboard drivers, such as on the ASUS website, and download ASMedia drivers for your motherboard, then those motherboards are likely vulnerable to the same issues as you would find on the AMD chipset. We have verified this on at least six vendor motherboards, mostly from the Taiwanese manufacturers. So yes, those products are affected.

IC: On the website, CTS-Labs states that the 0-day/1-day way of public disclosure is better than the 90-day responsible disclosure period commonly practiced in the security industry. Do you have any evidence to say that the paradigm you are pursuing with this disclosure is any better?

YLZ: I think there are pros and cons to both methods. I don’t think that it is a simple question. I think that the advantage of the 30 to 90 days is of course that it provides an opportunity for the vendor to consider the problem, comment on the problem, and provide potential mitigations against it. This is not lost on us.

On the other hand, I think that it also gives the vendors a lot of control over how they want to address these vulnerabilities: they can first deal with the problem and then come out with their own PR about the problem (I’m speaking generally and not about AMD in particular here), and in general they attempt to minimize the significance. If the problem is indicative of a widespread issue, as is the case with the AMD processors, then the company probably would want to minimize it and play it down.

The second problem is that if mitigations are not available within the relevant timespan, this paradigm does not make much sense. You know, we were talking to experts about the potential threat of these issues, and some of them are in the logic segment, in ASICs, so there is no obvious direct patch that can be developed as a workaround. A fix may or may not be available. The other set requires issuing a patch in the firmware and then going through the QA process, and typically when it comes to processors, QA is a multi-month process.

I estimate it will be many, many months before AMD is able to patch these things. If we had said to them, let’s say, ‘you guys have 30 days/90 days to do this’, I don’t think it would have mattered very much, and it would still be irresponsible on our part to come out after that period and release the vulnerabilities into the open.

So basically the choice that we were facing in this case was this: either we do not tell the public, let the company fix it, and only then disclose, in which case we would have to wait, in our estimate, as much as a year, meanwhile everyone is using the flawed product; or alternatively we never release the technical details, give them to the company, and disclose the existence of the flaws at the same time as we are giving it to the company, so that the customers are aware of the risks of those products and can decide whether to buy and use them, and so on.

In this case we decided that the second option is the more responsible one, but I would not* say that in every case this is the better method. But that is my opinion. Maybe Ilia (CTO) has a slightly different take on that. But these are my concerns.

*Editor’s Note: In our original posting, we missed out the ‘not’ which negates the tone of this sentence. Analysis and commentary have been updated as a result.

IC: Would it be fair to say that you felt that AMD would not be able to mitigate these issues within a reasonable time frame, therefore you went ahead and made them public?

YLZ: I think that is a very fair statement. I would add that we saw that it was a big enough issue that the consumer had the right to know about it.

IC: If, for example, CTS-Labs had been in charge of finding Meltdown and Spectre, would you have followed the same path of logic?

YLZ: I think that it would have depended on the circumstances of how we found it, how exploitable it was, how reproducible it was. I am not sure it would be the case. Every situation I think is specific.

HGST Deskstar NAS 4 TB Review

The traditional market for hard drives (PCs and notebooks) is facing a decline due to a host of advantages provided by SSDs. However, the explosion in the amount of digital content generated by households and businesses has resulted in the rapid growth of the SMB / SOHO / consumer NAS market. Hard drive vendors have jumped on this opportunity by tweaking the firmware and manufacturing process of their drives to create lineups specifically suited for the NAS market.

We have already provided comprehensive coverage of a number of 4 TB NAS drives and a few 6 TB ones. One of the drives that we couldn’t obtain in time for our initial 4 TB roundup was the HGST Deskstar NAS. After receiving a sample last month, we put the 4 TB version of the HGST Deskstar NAS through our evaluation routine for NAS drives. While most of our samples are barebones, HGST sampled us their retail kit, which includes mounting screws and an installation guide.

The correct choice of hard drives for a NAS system is influenced by a number of factors. These include expected workloads, performance requirements and power consumption restrictions, amongst others. In this review, we will discuss some of these aspects while comparing the HGST Deskstar NAS against other drives targeting the NAS market. The drives that we will be looking at today are listed below.

HGST Deskstar NAS (HDN724040ALE640)
WD Red Pro (WD4001FFSX-68JNUN0)
Seagate Enterprise Capacity 3.5″ HDD v4 (ST4000NM0024-1HT178)
WD Red (WD40EFRX-68WT0N0)
Seagate NAS HDD (ST4000VN000-1H4168)
WD Se (WD4000F9YZ-09N20L0)
Seagate Terascale (ST4000NC000-1FR168)
WD Re (WD4000FYYZ-01UL1B0)
Seagate Constellation ES.3 (ST4000NM0033-9ZM170)
Toshiba MG03ACA400
HGST Ultrastar 7K4000 SAS (HUS724040ALS640)
Prior to proceeding with the actual review, it must be made clear that the above drives do not all target the same specific market. For example, the WD Red and Seagate NAS HDD are for 1-8 bay NAS systems in the tower form factor. The WD Red Pro is meant for rackmount units up to 16 bays, but is not intended to be a replacement for drives such as the WD Re, Seagate Constellation ES.3, Seagate Enterprise Capacity v4, and the Toshiba MG03ACA400, which target enterprise applications requiring durability under heavy workloads. The WD Se and the Seagate Terascale target the capacity-sensitive cold storage / data center market.

The HGST Deskstar NAS is supposed to slot in between the WD Red and the WD Red Pro. HGST doesn’t specify an upper limit on the number of bays, but mentions only desktop form factor systems. Like other NAS drives, it is rated for 24×7 operation and includes a rotational vibration sensor for increased reliability.

Testbed Setup and Testing Methodology
Our NAS drive evaluation methodology consists of putting the units to the test in both DAS and NAS environments. We first start off with a feature set comparison of the various drives, followed by a look at the raw performance when connected directly to a SATA 6 Gbps port. In the same PC, we also evaluate the performance of the drive using some aspects of our direct attached storage (DAS) testing methodology. For evaluation in a NAS environment, we configure three drives of each model in a RAID-5 volume and process selected benchmarks from our standard NAS review methodology. Since our NAS drive testbed supports both SATA and SAS drives, but our DAS testbed doesn’t, only SATA drives are subject to the DAS benchmarks.
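For readers who want a feel for what a raw sequential test involves, here is a minimal sketch in Python. The device path is a hypothetical placeholder, and this is our simplification for illustration rather than the actual benchmark suite used in the review.

```python
# Minimal sketch of a raw sequential-read timing loop on Linux.
# /dev/sdX is a placeholder device node; run with root privileges, and note
# that the OS page cache can inflate results on repeat runs (a real benchmark
# would use O_DIRECT with aligned buffers, or a dedicated tool).
import os
import time

DEV = "/dev/sdX"      # hypothetical drive under test
CHUNK = 1 << 20       # 1 MiB per read
TOTAL = 1 << 30       # stop after 1 GiB

fd = os.open(DEV, os.O_RDONLY)
start = time.monotonic()
done = 0
while done < TOTAL:
    buf = os.read(fd, CHUNK)
    if not buf:
        break         # reached the end of the device
    done += len(buf)
elapsed = time.monotonic() - start
os.close(fd)
print(f"Sequential read: {done / elapsed / 1e6:.1f} MB/s")
```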

We used two testbeds in our evaluation, one for benchmarking the raw drive and DAS performance and the other for evaluating performance when placed in a NAS unit.

The hot swap bays of the Corsair Air 540 used in our DAS testbed have to be singled out for special mention. They were quite helpful in getting the drives processed in a fast and efficient manner for benchmarking. For NAS evaluation, we used the QNAP TS-EC1279U-SAS-RP. This is very similar to the unit we reviewed last year, except that it has a slightly faster CPU, more RAM, and support for both SATA and SAS drives.

The NAS setup itself was subjected to benchmarking using our standard NAS testbed.

Synology Launches ARM-based DS1515 and RS815 Value Series NAS Units

Synology introduced their x15+ series in the second half of 2014. The models were all based on the Intel Rangeley platform (x86). It is now time for a refresh of the Value Series using ARM-based SoCs – the x15 models. There are two units being introduced today, the 5-bay DS1515 in the tower form factor (MSRP: $650) and the 4-bay RS815 in a new short-depth rackmount form factor (MSRP: $600).

Based on the Annapurna Labs Alpine AL-314 quad-core Cortex-A15 SoC, the unit has four GbE LAN ports, two USB 3.0 ports, and two eSATA ports. The eSATA ports can be used to connect up to two DX513 / DX213 expansion units, providing up to 10 bays in addition to the five on the main unit. The DS1515 comes with 2GB of RAM.

The unit comes with Synology’s widely respected DiskStation Manager DSM 5.2 OS, supporting a wide variety of networking protocols, applications, and add-on packages. The AL-314 SoC comes with hardware encryption engines and a dedicated floating point unit. The presence of four LAN ports helps in setting up a high-performance high-availability cluster. Claimed throughput numbers indicate up to 403.7 MBps reads and 421.8 MBps writes.
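As a rough sanity check on those claims, the four aggregated GbE links put a hard ceiling on throughput. A quick back-of-the-envelope calculation (our arithmetic, not Synology's) shows the claimed numbers sit plausibly below the raw line rate:

```python
# Rough ceiling for four aggregated GbE links (our arithmetic, not Synology's).
links = 4
line_rate_MBps = 1000 / 8          # 1 Gb/s per link is 125 MB/s
print(links * line_rate_MBps)      # 500.0 MB/s, before Ethernet/SMB overhead
```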

The RS815 solution is internally the same as the RS814 introduced last year. Carrying the same Marvell ARMADA XP MV78230 that we reviewed a couple of years back in the LenovoEMC ix4-300d, the performance numbers come in at 216.7 MBps reads and 121.8 MBps writes. The eSATA port allows the connection of an RX415 expansion unit, providing up to 4 bays in addition to the four on the main unit.

The important update is the short-depth chassis design. The RS815 is only 29 cm deep, compared to the RS814’s 46 cm. This allows for denser deployments and compatibility with industrial server environments.

It is refreshing to see Synology continuing to invest in ARM-based models for the Value Series. While these are not true 64-bit solutions yet, they will ensure that Intel x86-based solutions are not the only game in town for high-performance NAS units. The presence of four GbE ports on the DS1515 brings a host of exciting use-cases to the table. The Alpine platform might just about make the grade for 10G NAS units, but it should be an excellent choice for NAS units with GbE LAN ports. The RS815, on the other hand, takes a tried and tested platform and fits it in a new chassis to expand its application areas.

QNAP Begins to Ship AMD Ryzen-Based TS-x77 Series NAS: 6, 8, 12 Bays

QNAP on Wednesday said that it had begun to ship its NAS devices based on AMD’s Ryzen processors. The new TS-677, TS-877 and TS-1277 NAS feature six, eight, or twelve hard drive bays and support SSD caching to maximize performance. The company positions the new NAS for various demanding applications, including VDI, private cloud, virtualization, containerized applications and so on.

QNAP introduced its TS-x77-series NAS at Computex in mid-2017 and became the first supplier of such products to adopt the AMD Ryzen platform. The company explained that the high core count, strong integer performance, versatile PCIe support, AES-NI support, competitive pricing, and other factors made the CPU a good choice for a NAS. The integrated capabilities of the Ryzen platform enabled QNAP to outfit all three models of the TS-x77 series with two M.2-22110 PCIe 3.0 x4 slots for caching SSDs, as well as two PCIe 3.0 x4 slots and a PCIe 3.0 x8 slot for 10GbE/40GbE NICs, PCIe NVMe SSDs, graphics cards, and other expansion cards.

The top-of-the-range 12-bay QNAP TS-1277 NAS is based on AMD’s eight-core Ryzen 7 1700 CPU and comes with 64 GB of DDR4 memory in order to handle various applications. Meanwhile, mid-range and entry-level models featuring 12, 8, or 6 bays use the six-core AMD Ryzen 5 1600 and come with 16 GB or 8 GB of memory (see the table below for specifications of the U.S. versions; for others it makes sense to check out the original news story from Computex). All the QNAP TS-x77-series NAS support RAID 50/60 as well as Qtier 2.0 IO Aware features for SSD tiered storage.

The new NAS will run QNAP’s QTS 4.3 operating system and therefore will support the same capabilities as other NAS from the manufacturer. In addition, QTS 4.3 supports various specially designed applications. The TS-x77 devices are also virtualization ready for VMware, Citrix, Microsoft Hyper-V, and Windows Server 2012 R2 environments with support for iSER (iSCSI Extensions for RDMA). The powerful CPUs inside enable the NAS to host virtual machines and run various applications.

Originally, QNAP promised to ship the TS-x77 series NAS sometime in Q3, but then delayed them to November, so expect the products to be available from retailers in the coming couple of weeks. Given the positioning, the new TS-677, TS-877 and TS-1277 NAS from QNAP are not going to be cheap. The manufacturer did not reaffirm pricing of the new units in its recent press release, but based on claims made earlier this year, the most affordable TS-677 (Ryzen 5 1600, 8 GB DDR4, no drives) will retail for $1699, whereas the high-end TS-1277 (Ryzen 7 1700, 64 GB, no drives) will cost $3599. Actual specs and MSRPs may vary by region, and fully-populated NAS will naturally cost more.

VIA Apollo KX133 Athlon Chipset – Part 1

Be sure to read Part 2 of our KX133 Review for more information on the chipset’s performance.
The slow demise of the Socket-7 platform quite possibly summed up a period in time when the hardware enthusiast was given the most choices when putting together a system. At the peak of the platform’s existence, there were three major CPU manufacturers producing processors for Socket-7 motherboards, there were solutions available in both AT and ATX form factors, and from a chipset standpoint, the platform had three chipset solutions from Intel and another three from VIA.

That theme of variety from the old Socket-7 days has long since been abandoned; until well into the establishment of the Slot-1 platform, all chipsets manufactured were made by Intel. That same trend seemed to be mirrored with the introduction of AMD’s Athlon late last year. One of the worries for the success of the Athlon that we shared at AnandTech was platform chipset support. While AMD announced that both ALi and VIA would have solutions ready for the Athlon, as launch time approached it quickly became obvious that neither the ALi nor the VIA solution would be ready for the release of the Athlon.

So what chipset would the Athlon launch with? AMD had done all of their internal testing and tweaking using their own in-house developed chipset, internally known as the Irongate chipset but commonly known to us as the AMD 750 chipset. The AMD 750 boasted AGP 2X and PC100 SDRAM support courtesy of the AMD 751 North Bridge as well as Ultra ATA 66 courtesy of the AMD 756 South Bridge.

It wasn’t too long before Athlon based motherboards began shipping with hybrids of the AMD chipset and VIA’s upcoming solution. Motherboards like the ASUS K7M and FIC SD-11 featured AMD’s 751 North Bridge but VIA’s 686A South Bridge, in order to move away from using AMD as a chipset supplier.

Last November, we were told that VIA’s upcoming Athlon chipset, the Apollo KX133, was already complete and that they were hard at work with motherboard manufacturers to make sure that the delicate implementation of the chipset was handled properly. This would help to eliminate any of the motherboard problems that the first wave of Athlon boards based on the AMD 750 chipset so regrettably exhibited.

Finally, on January 10 of this year, VIA announced that they had begun volume shipping of the “first independently developed chipset to support the AMD Athlon processor”, known to all of us as the KX133. The release of the KX133 puts VIA in the position of a virtual monopoly in the Athlon chipset market, since motherboard manufacturers will refrain from producing many (if any) AMD 750 based solutions, and since ALi’s Athlon solution has yet to be seen anywhere other than behind a glass display case at last year’s Fall Comdex.

With the exception of a few Athlon motherboards that were being developed with the AMD 750 in mind, all Athlon motherboards that will be shipping from manufacturers that have yet to enter the Athlon motherboard market will be KX133 based solutions. It won’t be long before the AMD 750 disappears from the market and VIA assumes the role of exclusive Athlon chipset provider for the time being. Scary thought?

It shouldn’t be. VIA has never been known to abuse their power during the times when they have been given the upper hand in a market (e.g. the Super7 market). The only question is: in spite of VIA’s history, can the KX133 step up to the plate and offer performance and compatibility (the latter being a weak point in VIA’s history) superior to that of the AMD 750?

VIA Apollo Pro 266: The P3 gets DDR

Double Data Rate SDRAM was the talk of the town by the end of 2000, and now that it is here, the market can’t find enough ways to put it to use. There is a tremendous backlash against the business practices of Rambus, the chief proprietor of DDR’s closest competitor, and it seems as if a lot of the support for DDR grew out of disgust for Rambus.

In synthetic performance tests and forward looking benchmark comparisons such as those we have performed under SPEC CPU2000, DDR SDRAM definitely has a bright future. However in terms of offering tangible performance benefits today, the only advantage DDR SDRAM offers over RDRAM is that it isn’t any slower than PC133 SDRAM in everyday applications and games.

There is no doubt about it though, as we do begin to see newer applications and games hit the market, there will be a much greater demand for a higher bandwidth memory solution. With this in mind, the scene is now set for DDR technology to have a banner year in 2001. The industry has been asking for DDR technology to be brought into the system memory market and now that it is here there is the question of how to promote it.

Currently the price gap that exists between DDR SDRAM and PC133 SDRAM is nowhere near the 8 – 10x levels that PC800 RDRAM was at just a year ago, but that doesn’t mean that the price gap is acceptable. Companies like Crucial, in an attempt to gain some control in the market, have already begun offering PC1600 DDR SDRAM at price points identical to those of PC133 SDRAM. Twice the bandwidth at the same cost: a very good marketing slogan if you ask us.

Where there is memory, there are also platforms to take advantage of it. AMD kicked off the DDR bandwagon late last year with the release of their 760 chipset. A combination of its 133MHz DDR FSB and PC2100 DDR SDRAM support resulted in a 10 – 15% performance gain in present day applications and games. There were definitely cases where the performance improvements grew even beyond those figures, but for today’s user, the performance improvement was generally within that range.
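As a quick aside on naming, DDR module designations encode peak bandwidth rather than clock speed. A short worked example (our arithmetic, assuming the standard 64-bit module width) shows where the PC1600 and PC2100 labels come from:

```python
# DDR module names encode peak bandwidth in MB/s (our arithmetic):
# bus clock (MHz) x 2 transfers per clock x 8 bytes per 64-bit transfer.
def ddr_peak_MBps(bus_clock_mhz):
    return bus_clock_mhz * 2 * 8

print(ddr_peak_MBps(100))   # 1600 MB/s -> PC1600 (DDR200)
print(ddr_peak_MBps(133))   # 2128 MB/s, marketed as PC2100 (DDR266)
```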

A somewhat disappointing ALi release followed the AMD 760, and although the MAGiK1 chipset has done some maturing since we first took a look at it, it is still in its relative infancy. Interestingly enough, throughout the end of 2000, VIA was missing from the DDR scene. Although they officially announced their first DDR chipset in September of 2000, we did not see any motherboards based on the Apollo Pro 266 back then.

Instead of promoting DDR platforms, VIA wisely chose to focus on what was important to them in the market: producing and shipping platforms that were currently in great demand. This brought the introduction of the KT133A chipset, which, although it won’t offer the security of high future performance, does present the best price/performance match for the Athlon right now. We also saw the long awaited release of the KM133 chipset from VIA, the first value PC solution directed at the Duron market from the manufacturer that had supported the Socket-A platform so well since its introduction last June.

While VIA’s Socket-A platforms might be the most talked about, their Socket-370 chipsets are what paved the way for them to gain the market share they currently enjoy. It wasn’t surprising, then, that VIA’s first DDR platform would be directed at the Socket-370 market because at the end of the day, unlike many of the dot-coms of recent history, VIA’s intent was to produce a profit. This isn’t to say that the Socket-A platform isn’t profitable, but VIA felt that pursuing the Socket-370 market first made the most sense since they are potentially dealing with a larger volume of sales; not to mention that VIA pretty much had no competition in the Socket-A market to worry about.

With that said, in September of 2000 came VIA’s announcement of their first DDR SDRAM capable chipset: the Apollo Pro 266.

AMD Launches Radeon R9 380X: Full-Featured Tonga at $229 for the Holidays

Back in September of 2014 AMD released their first Graphics Core Next 1.2 GPU, Tonga, which was the GPU at the heart of the Radeon R9 285. For all intents and purposes Tonga was the modern successor to AMD’s original GCN GPU, Tahiti, packing in the same 32 CUs and 32 ROPs, while other features such as color compression allowed AMD to trim the memory bus to 256-bits wide without a performance hit. With Tahiti slowly going out of date from a feature perspective, Tonga was an interesting and unprecedented mid-cycle refresh of a GPU.

However in the 14 months since the launch of the first Tonga product AMD has never released a fully enabled desktop SKU, until now. Radeon R9 285 utilized a partially disabled Tonga – only 28 of 32 CUs were enabled – and while it was refreshed as the Radeon R9 380 as part of the Radeon 300 series launch, a fully enabled version of Tonga only showed up in mobile, where in the form of the R9 M295X it was used in the 27” iMac. In its place AMD continued selling the Tahiti based Radeon R9 280 series for much longer than we would have expected, leading to an atypical situation for AMD where a card using the fully enabled GPU is only now showing up over a year later. In some ways Radeon R9 380X is a card we were starting to think we’d never see.

But at last a full-featured Tonga is here as the heart of AMD’s latest video card, the Radeon R9 380X. AMD is launching the 380X at this time to set up their product stack for the holidays, looking to shake up the market shortly before Black Friday and dig out a spot in the gap between NVIDIA’s GeForce GTX 970 and GTX 960 cards. By hitting NVIDIA a bit above the ever-popular $200 spot, AMD is aiming to edge out NVIDIA on price/performance while also snagging gamers looking to upgrade from circa-2012 video cards.

Starting as always with a specification comparison, the R9 380X is going to be a very straightforward card. Rather than the R9 380’s 28 CUs, all 32 CUs are enabled on the R9 380X. As this was the only thing disabled on the R9 380, the increased stream processor and texture resources are the only material GPU change compared to the R9 380. Otherwise we’re still looking at the same 32 ROPs backed by a 256-bit memory bus, all clocked at 970MHz.

Meanwhile as far as memory goes, the R9 380X sees AMD raise the default memory configuration from 2GB for the R9 380 to 4GB for this card. We’ve reached the point where 2GB cards are struggling even at 1080p – thanks in large part to the consoles and their 8GB of shared memory – so to see 4GB as the base configuration is a welcome change. The R9 380 did offer both 2GB and 4GB, but as one might expect, 2GB was (and still is) the more common SKU, which for better or worse makes the R9 380X stand apart from its older sibling even more. Otherwise the 5.7Gbps memory clockspeed of the R9 380X is a slight bump from the 5.5Gbps of the 2GB R9 380, though it should be noted that 5.7Gbps was also the minimum for the 4GB R9 380 SKUs. So in practice, just as there’s no increase in the GPU clockspeed, there’s no increase in the memory clockspeed (or bandwidth) relative to 4GB cards.
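For reference, the quoted memory clocks translate into peak bandwidth as follows; this is our arithmetic from the published specifications:

```python
# Peak memory bandwidth = per-pin data rate x bus width / 8 (our arithmetic).
bus_width_bits = 256
for rate_gbps in (5.5, 5.7):
    print(rate_gbps, "Gbps ->", rate_gbps * bus_width_bits / 8, "GB/s")
# 5.5 Gbps -> 176.0 GB/s (2GB R9 380)
# 5.7 Gbps -> 182.4 GB/s (R9 380X and 4GB R9 380)
```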

Similarly, from a power perspective the R9 380X’s typical board power remains unchanged at 190W. In practice it will be slightly higher thanks to the enabled CUs, but otherwise AMD hasn’t made any significant changes to shift it one way or another.

From a performance perspective then, the R9 380X is not going to be a very exciting card. After three releases of the fully enabled Tahiti GPU – the Radeon 7970, 7970 GHz Edition, and R9 280X – the architectural and clockspeed similarities of the R9 380X mean that it’s essentially a fourth revision of this product. Which is to say that you’re looking at performance a percent or two better than the 7970: well-trodden territory at this point.

The R9 380X’s principal reason to exist at this point is to allow AMD to refresh their lineup by tapping the rest of Tonga’s GPU performance, both to have something new to close out the rest of the year and to give them a card that can sit solidly between NVIDIA’s GeForce GTX 970 and GTX 960. That AMD is launching it now is somewhat arbitrary – we haven’t seen anything new in the $200 to $500 range since the GTX 960 launched in January, and AMD could have launched it at any time since – and along those lines AMD tells us that they haven’t seen a need to launch this part until now. With the R9 380 otherwise shoring up the $199 price point until recently, there’s always a trade-off between having better positioning than the competition and having too many products in your line (with the 300 + Fury series the tally is now 9 cards).

In AMD’s new lineup the R9 380X will slot in between AMD’s more expensive R9 390 and the cheaper R9 380. AMD is promoting this card as an entry-level card for 2560×1440 gaming, though with the more strenuous games released in the last 6 months that is going to require some quality compromises to achieve. As it stands I’d consider the 390 more of a 1440p card, while the R9 380X is better positioned as AMD’s strongest 1080p card; only in the most demanding games should the R9 380X face any real challenge.

As far as performance goes then, the R9 380X is about 10% faster than the 2GB R9 380 at 1080p, with the card taking a much more significant advantage in games where 2GB cards are memory bottlenecked. Otherwise the performance is almost exactly on par with the 7970 and its variants, while the more powerful R9 390 has a sizable 43% performance advantage thanks to its greater CU count, memory bandwidth, and ROPs. This makes the R9 390 a bit of a spoiler on value, though its $290+ price tag ultimately puts it in its own class. Or to throw in a quick generational comparison to AMD’s original $250 GCN card, the Radeon HD 7850: you’re looking at a 75% increase in performance at this price bracket over 3 years.

Today’s launch of the R9 380X is a hard launch, with multiple board partners launching cards today. Most of the partners will be reusing their R9 380 designs, which is fitting given the similarities between the two cards. Expect to see a significant number of factory overclocked cards, as Tonga has some headroom for the partners to play with. OC cards will start at $239 – a $10 premium – while the card AMD sampled us, ASUS’s STRIX R9 380X OC, will retail for $259.

As for the competition, as I previously mentioned AMD will be slotting in between the GeForce GTX 970 and GTX 960. The former is going to be quite a bit faster but also quite a bit more expensive, while the R9 380X will handily best the 2GB GTX 960, albeit with a price premium of its own. At this point it’s safe to say that AMD holds a distinct edge on performance for the price, as they often do, though as has been the case all this generation they aren’t going to match NVIDIA’s power efficiency.

Finally, on a housekeeping note we’ll be back on Monday with a review of the ASUS STRIX R9 380X alongside a look at performance at reference clocks. We’ve only had the card and AMD’s launch drivers since the beginning of this week and there is still some work to be done before we can publish our review, so stay tuned.

ATI Gives Chipsets a Try – Introducing the Radeon IGP

If you look back at our coverage of Computex 2001, as well as all of the reports from other publications, you’ll notice one common theme: nForce. While the chipset was only used by 5 motherboard manufacturers, it quickly became one of the biggest stories of the show. But it took months before we even got a chance to look at a board that was performing well. Everyone expected the platform to blow everything else out of the water and to be priced competitively with competing solutions from ALi, SiS and VIA. Technology that was borrowed from the Xbox, such as real-time Dolby Digital encoding and isochronous HyperTransport links, was going to advance the chipset industry by leaps and bounds previously unheard of.

When the nForce launched, it made a splash felt by very few. The chipset itself is one of the highest performing solutions for the Athlon platform but it is also the most expensive. We have heard quotes from manufacturers saying that for every 200 KT266A based motherboards they ship, they sell only a single nForce. With those levels of sales, any significant market penetration was out of reach. It’s easy to sit back and criticize the nForce launch but that does no one any good, instead it’s more useful to look at a different approach and see if it will work any better.

Rewinding about two years, ATI announced their first PC core logic solution – the S1-370 TL. This Pentium III chipset featured an integrated GPU developed by ArtX, which ATI acquired earlier that year. Even more interesting was the fact that the S1-370 TL featured a 128-bit memory bus, much like today’s nForce 420-D. Granted, back then the memory type of choice was conventional PC100 or PC133 SDRAM, but offering 2.1GB/s of memory bandwidth at the beginning of 2000 was a big deal.
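That 2.1GB/s figure falls straight out of the bus width and clock; here is a quick worked example (our arithmetic, assuming PC133 on the 128-bit bus):

```python
# Where the 2.1GB/s figure comes from (our arithmetic): a 128-bit
# single-data-rate SDRAM bus at 133 MHz moves 16 bytes per clock.
bus_bytes = 128 / 8
print(bus_bytes * 133)   # 2128 MB/s, i.e. ~2.1 GB/s
```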

As you can probably guess, the S1-370 TL never took off, and ATI’s first entry into the desktop chipset market has since been forgotten. We’ve known for some time that ATI would produce an nForce-like solution for the PC, and we’ve even seen demonstrations of it behind closed doors. Today, ATI is publicly announcing their approach to PC core logic design with the Radeon IGP Integrated Graphics Chipset family.

VIA Introduces Quad Band Memory – 2X DDR at 1X Prices

When RDRAM made its debut on the desktop one of the biggest selling points of the Rambus technology was its very low pincount; at 16-bits per channel compared to the current 64-bit wide DDR buses, it’s easy to see where the lower pin count comes from. As a serial interface (much like Serial ATA), lower pin count and the ability to run at very high speeds are major benefits of the technology; not to mention that it is much easier on motherboard manufacturers to design solutions with lower pin count memory interfaces.
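To put the pin count argument in numbers, consider bandwidth per data pin. This is our arithmetic, comparing PC800 RDRAM against PC1600 DDR, both of which peak at 1.6GB/s:

```python
# Bandwidth per data pin (our arithmetic), illustrating the pin count argument:
# RDRAM delivers the same peak bandwidth over far fewer pins.
pc800_rdram = 1600 / 16   # 1.6 GB/s over a 16-bit channel at 800 MT/s
pc1600_ddr = 1600 / 64    # 1.6 GB/s over a 64-bit channel at 200 MT/s
print(pc800_rdram)        # 100.0 MB/s per data pin
print(pc1600_ddr)         # 25.0 MB/s per data pin
```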

Very few can argue that RDRAM did exhibit technical superiority to DDR; however there are issues such as cost, risk and availability of modules that kept the technology out of the mainstream market. Because of this, and the inability of DDR SDRAM to provide as much bandwidth as is necessary for CPUs like the Pentium 4, chipset manufacturers have had to start working on dual channel DDR solutions.

NVIDIA was the first with their 128-bit DDR interface on the original nForce later followed by Intel with their E7500 chipset. Both SiS and VIA will be joining the ranks of dual channel DDR chipset manufacturers but make no mistake, going to dual channel DDR (128-bit parallel memory interface) does not make the life of the motherboard manufacturer any easier.

It isn’t impossible to implement a 128-bit memory interface, and you can actually do so on a 4-layer motherboard; however, that still doesn’t mean that motherboard manufacturers wouldn’t like to see a simpler solution. DDR-II may end up being that simpler solution, as a single 64-bit wide channel should be able to offer more bandwidth than present day dual channel DDR solutions; the downside is that DDR-II won’t be hitting the mainstream market until 2004.

Intel’s Prescott CPU will have a 667MHz FSB, demanding a memory interface capable of providing 5.3GB/s of memory bandwidth. While RDRAM definitely fits the bill, the market currently wants a DDR based solution to do the same, and unfortunately it would take a dual channel DDR333 solution to provide that sort of bandwidth. It will be done, as Springdale (Prescott’s introduction chipset) will be a dual channel DDR333 platform, but again, the motherboard manufacturers want something simpler.
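The 5.3GB/s figure follows directly from the FSB width and transfer rate, and it is also why nothing short of dual channel DDR333 keeps up. A quick worked example (our arithmetic):

```python
# Why a 667MHz FSB demands ~5.3GB/s, and why dual channel DDR333 matches it
# (our arithmetic): both buses are 64 bits (8 bytes) wide.
fsb_GBps = 667 * 8 / 1000          # 5.336 GB/s demanded by the FSB
dual_ddr333 = 2 * 333 * 8 / 1000   # 5.328 GB/s from two DDR333 channels
print(fsb_GBps, dual_ddr333)
```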

Being very in touch with the motherboard manufacturers and desperately trying to drum up interest in their Pentium 4 chipsets, VIA announced an alternative memory architecture based on currently available DDR technology: Quad Band Memory (QBM).

Budget CPU Shootout: Clash of the ‘rons

With new technology constantly being developed and released into the high end market, it is sometimes easy to overlook the slightly less glamorous world of budget microprocessors. It’s been a while since we’ve taken a look at what AMD and Intel have to offer in the area of low cost computing, and our curiosity recently got the better of us.

We were particularly curious about what you could get for $100, and it turns out that there are quite a few CPUs that you can get for less than the price of a motherboard. Currently, the budget market is made up of low end Athlon XP, Celeron, and Duron processors. There aren’t any Pentium 4 processors that come in under our $100 price point, but we’ve included the Pentium 4 1.8A (Northwood) as a reference point for the Celeron processors.

Performance is always being pushed in the high end market, but it is arguably even more important in the low end systems. If we are trying to save money on a computer system, we want our dollar to go as far as possible, so price/performance is the most important factor when determining components to fill a budget box. Just because we want to save money doesn’t mean we want to suffer a huge performance loss. With the price of PCs that perform well dropping all the time, it becomes easier for those who haven’t yet entered the digital realm to join the party. Of course, the last thing someone wants when they first start up their new computer is to be frustrated by lackluster performance. Hopefully this article will serve to help people make the best possible decision when it comes to budget computing.

These sub-$100 CPUs serve as decent upgrades for aging systems (e.g. the P3-800 that is barely chugging along) when combined with a new motherboard, but they are also the heart and soul of many of today’s sub-$1000 PCs that you’d find in the retail market. Walk into any Best Buy or CompUSA and you’ll find tons of PCs selling for $400 – $600. The OEMs making these systems are cutting corners in every way possible, so you had better believe that one of these CPUs we’re comparing today will be under the hood. Retail customers should pay close attention to the results of this roundup — they may be even more shocking than expected.

When looking to get the absolute maximum performance out of every dollar spent, overclocking should be considered. We are hoping to address the overclockability of these budget processors in an upcoming article, but for now, we will only be looking at stock speeds.

Before we get to the tests, let’s take a look at the processors.