Best external hard drives of 2018

Even if you have one of the best SSDs, you can quickly run out of space – so it’s critically important to have one of the best external hard drives, especially if you work with a lot of large files. Don’t worry, though: we at TechRadar are here to help you find the best external hard drive money can buy today.

When you go out shopping for one of the best external hard drives, you should think about some important details. For one, you’ll need enough storage – trust us, you don’t want to run out of space at an inopportune moment. However, you also don’t want to pay for storage you’re not going to use.

You’ll also need to consider data transfer speeds – the best hard drives let you transfer large files from your PC quickly, so you can move on to more important projects.
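To put those transfer speeds in perspective, here is a quick back-of-the-envelope calculation; the throughput figures in the snippet are rough assumptions for typical interfaces, not measured values for any particular drive:

```python
# Rough estimate of how long a file copy takes at typical external-drive
# interface speeds. The sustained throughputs below are assumptions;
# real-world figures vary with the drive, enclosure, and port.

def transfer_time_seconds(file_size_gb: float, speed_mb_per_s: float) -> float:
    """Time to move file_size_gb gigabytes at speed_mb_per_s megabytes/second."""
    return file_size_gb * 1000 / speed_mb_per_s

# Assumed sustained throughputs (MB/s):
SPEEDS = {"USB 2.0 HDD": 35, "USB 3.0 HDD": 130, "USB 3.0 SSD": 400}

for name, speed in SPEEDS.items():
    t = transfer_time_seconds(50, speed)  # a hypothetical 50 GB project folder
    print(f"{name}: {t / 60:.1f} minutes")
```

Even with these rough numbers, the gap between interfaces is stark: the same 50 GB folder that ties up a USB 2.0 drive for nearly half an hour moves in a few minutes over USB 3.0.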

The best external hard drives are also dependable and rugged, so you can store your data without worry. They should also be light enough to carry in your bag, with capacities large enough to keep your data safe when travelling.

There’s a huge range of external hard drives on offer, so we’ve put together this list of the best external hard drives to help you find the perfect one for your needs.

Intel NUC8i7HVK (Hades Canyon) Gaming Performance – A Second Look

The Intel NUC8i7HVK (Hades Canyon) was reviewed in late March, and emerged as one of the most powerful gaming PCs in its form-factor class. Our conclusion was that the PC offered gaming performance equivalent to that of a system with a GPU between the NVIDIA GTX 960 and GTX 980. We received feedback from our readers that the games used for benchmarking were old, and the compared GPUs dated. In order to address this concern, we spent the last few weeks updating our gaming benchmark suite for gaming systems / mini-PCs. With the updated suite in hand, we put a number of systems through their paces. This article presents the performance of the Hades Canyon NUC with the latest drivers in recent games. We also pulled in the gaming benchmark numbers from a couple of systems still in our review queue in order to give readers an idea of the performance of the Hades Canyon NUC as compared to some of the other contemporary small-form-factor gaming machines.

The gaming benchmark suite used to evaluate the Hades Canyon NUC in our launch review was dated and quite limited in its scope. Games such as Sleeping Dogs and Bioshock Infinite are no longer actively considered by consumers looking to purchase gaming systems. In addition, our suite did not have any DirectX 12 games. In order to address these issues, we set out to identify some modern games for inclusion in our gaming benchmarks. The intent was to have a mix of games and benchmarks that could serve us well for the next couple of years.

The updated gaming benchmark suite has both synthetic and real-world workloads. Futuremark’s synthetic benchmarks give a quick idea of the prowess of the GPU component in a system. We process and present results from all the standard workloads in both 3DMark (v 2.4.4264) and VRMark (v 1.2.1701). Real-world use-cases are represented by six different games:

Best mining CPU 2018: the best processors for mining cryptocurrency

If you’re looking for the best processors for cryptocurrency mining in 2018, then you’ve come to the right place, as we’ve listed the very best CPUs for mining a range of cryptocurrencies.

While many people think that graphics cards are the most important component when it comes to mining, getting the right CPU for your mining rig is also important.

It may be tempting to go for the cheapest CPU possible in order to maximise your mining profits, but doing so may actually hamper your mining. As AMD revealed in a recent interview with us, mining with a CPU can result in some impressive profits.

Pair the best mining CPU with the best mining GPU and best mining motherboard, choose the best cryptocurrency for your needs, and you’ll soon have a mining powerhouse that can start earning you a fair chunk of money, helping to pay off the cost of the hardware in the long run.

So, if you’re keen to make the most out of the current cryptocurrency craze, here are the best CPUs for mining in 2018.

MACOM Sells AppliedMicro’s X-Gene CPU Business

MACOM last week announced that it has entered into an agreement to sell the microprocessor-related assets it bought from AppliedMicro to Project Denver Holdings, a new company backed by The Carlyle Group asset management company.

MACOM closed the acquisition of AppliedMicro early in 2017. Back then, the company made no secret that it was primarily interested in AppliedMicro’s MACsec and 100G to 400G solutions, but not in the company’s X-Gene server CPUs. MACOM’s plan was to become a leader in datacenter communication technologies, with a focus on optical networks in particular (analog, photonic and mixed-signal PHYs). As such, the X-Gene business was not exactly the best fit for MACOM, and the future of the X-Gene processor division has been unclear.

The X-Gene 3 server platform looked promising when it was introduced last November. The CPU has 32 custom ARMv8 cores running at up to 3 GHz, with 32 MB of L3 cache, eight DDR4-2667 memory channels with ECC, and 42 PCIe 3.0 lanes. MACOM started to sample the X-Gene 3 among interested parties this March, and Kontron even demonstrated a server based on the CPU at MWC 2017. MACOM has not started commercial shipments of the X-Gene 3 yet; nonetheless, the X-Gene 3 and its possible successors were impressive enough for The Carlyle Group to establish a new entity that will finalize the X-Gene 3 and continue development efforts.
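As a quick sanity check on those specifications, the eight DDR4-2667 channels imply a theoretical peak memory bandwidth that can be worked out directly. This is a sketch assuming the standard 64-bit (8-byte) DDR4 channel width; real-world achievable bandwidth will be lower:

```python
# Theoretical peak DRAM bandwidth implied by the X-Gene 3 specs quoted above:
# eight DDR4-2667 channels, each 64 bits (8 bytes) wide per transfer.

def ddr_peak_bandwidth_gbs(channels: int, transfers_per_s: float,
                           bytes_per_transfer: int = 8) -> float:
    """Peak bandwidth in GB/s across all memory channels."""
    return channels * transfers_per_s * bytes_per_transfer / 1e9

bw = ddr_peak_bandwidth_gbs(channels=8, transfers_per_s=2667e6)
print(f"Peak DRAM bandwidth: {bw:.1f} GB/s")  # ~170.7 GB/s
```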

Neither MACOM nor Carlyle have disclosed the financial terms of the deal, but MACOM will get a minority stake in Project Denver Holdings. Speaking of the latter, it is worth noting that the new company has its own leadership team and strong financial backing from Carlyle Partners VI (a $13 billion U.S. buyout fund). Assuming that Project Denver Holdings keeps AppliedMicro’s development team and invests a sufficient amount of money in the X-Gene in general, the new company has a good chance of remaining a leading supplier of ARMv8-based server CPUs. At the moment, the X-Gene is used by over half a dozen server makers, so Project Denver Holdings is getting a business with existing, incoming and future products as well as customers.

Intel Mentions 10nm, Briefly

LAS VEGAS, NV – Today during a breakfast presentation at CES, Intel’s Gregory Bryant, SVP of the Client Computing Group, finally broke Intel’s silence on the state of their 10nm process. If you were looking for some spectacular news about the state of 10nm, this wasn’t it: Mr Bryant stated that Intel met its goal of shipping 10nm processors to customers in 2017 – though to whom isn’t being said – and that Intel is ready to ramp up production through 2018. This is a severely limited update compared to showing off a device with a 10nm CPU at the main keynote of CES last year – pushing this news to a side meeting on the show floor will prompt further questions on the state of Intel’s 10nm process.

More information as it comes in. When we hit a WiFi spot, we will upload the full presentation video.

Our Interesting Call with CTS-Labs

In light of the recent announcement of potential vulnerabilities in Ryzen processors, two stories have emerged. The first is that AMD processors could have secondary vulnerabilities in the secure processor and ASMedia chipsets. The second concerns the company that released the report, CTS-Labs: its approach to this disclosure, the background of this previously unknown security-focused outfit, and its intentions as well as its corporate structure. Depending on the angle you take in the technology industry – as a security expert, a company, the press, or a consumer – one of these stories should interest you.

In our analysis of the initial announcement, we took time to look at what information we had on the flaws, as well as identifying a number of key features about CTS-Labs that did not fit our standard view of a responsible disclosure, and a few points on Twitter that did not seem to add up. Since then, we have approached a number of experts in the field and a number of companies involved, and attempted to drill down into the parts of the story that are not so completely obvious. I must thank the readers who reached out to me over email and through Twitter, who have helped immensely in getting to the bottom of what we are dealing with.

On the back of this, CTS-Labs has been performing a number of press interviews, leading to articles such as this at our sister site, Tom’s Hardware. CTS reached out to us as well, however a number of factors led to delaying the call. Eventually we found a time to suit everyone. It was confirmed in advance that everyone was happy the call was recorded for transcription purposes.

Joining me on the call was David Kanter, a long-time friend of AnandTech, semiconductor industry consultant, and owner of Real World Technologies. From CTS-Labs, we were speaking with Ido Li On, CEO, and Yaron Luk-Zilberman, CFO.

The text here was transcribed from the recorded call. Some superfluous/irrelevant commentary has been omitted, with the wording tidied a little to be readable.

This text is being provided as-is, with minor commentary at the end. There is a substantial amount of interesting detail to pick through. We try to tackle both of the sides of the story in our questioning.

IC: Who are CTS-Labs, and how did the company start? What are the backgrounds of the employees?

YLZ: We are three co-founders, graduates of a unit called 8200 in Israel, a technological unit of intelligence. We have a background in security, and two of the co-founders have spent most of their careers in cyber-security and working as consultants for the industry performing security audits for financial institutions and defense organizations and so on. My background is in the financial industry, but I also have a technological background as well.

We came together in the beginning of 2017 to start this company, whose focus was to be on hardware in cyber security. As you guys probably know, this is a frontier/niche area now that most of the low-hanging fruit in software has been picked. So this is where the game is moving, we think at least. The goal of the company is to provide security audits, and to deliver reports to our clients on the security of those products.

This is our first major publication. Mostly we do not go public with our results, we just deliver our results to our customers. I should say very importantly that we never deliver the vulnerabilities themselves that we find, or the flaws, to a customer to whom the product does not belong. In other words, if you come to us with a request for an audit of your own, we will give you the code and the proof-of-concepts, but if you want us to audit someone else’s product, even as a consumer of a product or a competitor’s product, or a financial institution, we will not give you the actual code – we will only describe to you the flaw that we find.

This is our business model. This time around in this project, we started with ASMedia, and as you probably know the story moved to AMD as they imported the ASMedia technology into their chipset. Having studied one we started studying the other. This became a very large and important project so we decided we were going to go public with the report. That is what has brought us here.

IC: You said that you do not provide flaws to companies that are not the manufacturer of what you are testing. Does that mean that your initial ASMedia research was done with ASMedia as a customer?

ILO: No. We can audit a product that the manufacturer of the product orders from us, or that somebody else, such as a consumer or a third interested party, orders from us, and then we will provide a description of the vulnerabilities, much like our whitepaper, but without the technical details to actually implement the exploit.

Actually ASMedia was a test project, as we’re engaged in many projects, and we were looking into their equipment and that’s how it started.

IC: Have you, either professionally or as a hobby, published exploits before?

ILO: No we have not. That being said, we have been working in this industry for a very long time: we have done security audits for companies, found vulnerabilities, and given that information to the companies as part of consultancy agreements, but we have never actually gone public with any of those vulnerabilities.

IC: What response have you had from AMD?

ILO: We got the email today to say they were looking into it.

DK: If you are not providing Proof of Concept (PoC) to a customer, or technical details of an exploit, with a way to reproduce it, how are you validating your findings?

YLZ: After we do our validation internally, we take a third party validator to look into our findings. In this case it was Trail of Bits, if you are familiar with them. We gave them full code, full proof of concept with instructions to execute, and they have verified every single claim that we have provided to them. They have gone public with this as well.

In addition to that, in this case we also sent our code to AMD, and then Microsoft, HP, and Dell, the integrators, and also some domestic and other security partners. So they all have the findings. We decided not to make them public. The reason is that we believe it will take many, many months for the company, even under ideal circumstances, to come out with a patch. So if we wanted to inform consumers about the risks that they have on the product, we just couldn’t afford, in our minds, to not go public.

DK: Even when the security team has a good relationship with a company who has a product with a potential vulnerability, simply verifying a security hole can take at least a couple of days. For example, with the code provided with Spectre, a security focused outsider could look at the code and make educated guesses within a few minutes as to the validity of the claim.

ILO: What we’ve done is this. We have found thirteen vulnerabilities, and we wrote a technical write-up on each one of those vulnerabilities, with code snippets showing exactly how they work. We have also produced working PoC exploits for each one of the vulnerabilities so you can actually exploit each one of them. And we have also produced very detailed tutorials on how to run the exploits on test hardware, step-by-step, to get all the results that we have been able to produce here in the lab. We documented it so well that when we gave it to Trail of Bits, they took it, ran the procedures by themselves without talking to us, and reproduced every one of the results.

We took this package of documents, procedures, and exploits, and we sent it to AMD and the other companies. The verification process took Trail of Bits about 4-5 days to complete, so I am very certain that they will be able to reproduce this. Also, we gave them a list of exactly what hardware to buy and instructions with all the latest BIOS updates and everything.

YLZ: We faced a problem – how do we make a third party validator not just sit there and say ‘this thing works’ but actually do it themselves without contacting us? We had to write a detailed manual, a step-by-step kind of thing. So we gave it to them, and Trail of Bits came back to us in five days. I think that the guys we sent it to are definitely able to do it within that time frame.

IC: Can you confirm that money changed hands with Trail of Bits?

(This was publicly confirmed by Dan Guido earlier, who stated that they were expecting to look at one test out of curiosity, but 13 came through, so they invoiced CTS for the work. Reuters reports that a $16,000 payment was made as ToB’s verification fee for third-party vulnerability checking.)

YLZ: I would rather not make any comments about money transactions and things of that nature. You are free to ask Trail of Bits.

IC: The standard procedure for vulnerability disclosure is to have a CVE filing and MITRE numbers. We have seen public disclosures, even 0-day and 1-day public disclosures, with relevant CVE IDs. Can you describe why you haven’t in this case?

ILO: We have submitted everything we have to US-CERT and we are still waiting to hear back from them.

IC: Can you elaborate as to why you did not wait for those numbers to come through before going live?

ILO: It’s our first time around. We haven’t – I guess we should have – this really is our first rodeo.

IC: Have you been in contact with ARM or Trustonic about some of these details?

ILO: We have not, and to be honest with you I don’t really think it is their problem. AMD uses Trustonic t-Base as the base for the firmware on its secure processor, but they have built quite a bit of code on top of it, and in that code are security vulnerabilities that don’t have much to do with Trustonic t-Base. So we really don’t have anything to say about t-Base.

IC: As some of these attacks go through TrustZone, an Arm Cortex-A5, and the ASMedia chipsets, can you speak to whether other products with these features can also be affected?

ILO: I think that the vulnerabilities found are very much … Actually let us split this up between the processor and the chipset as these are very different.

For the secure processor, AMD built quite a thick layer on Trustonic t-Base. They added many features, and some of those features break the isolation between processes running on top of t-Base. So there are a bunch of vulnerabilities there that are not from Trustonic. In that respect we have no reason to believe that we would find these issues on any other product that is not AMD’s.

Regarding the chipset, there you actually have vulnerabilities that affect a range of products. Because as we explained earlier, we just looked first at AMD by looking at ASMedia chips. Specifically we were looking into several lines of chips, one of them is the USB host controller from ASMedia. We’re talking about ASM1042, ASM1142, and the recently released ASM1143. These are USB host controllers that you put on the motherboard and they connect on one side with PCIe and on the other side they give you some USB ports.

What we found are these backdoors that we have been describing that come built into the chips – there are two sets of backdoors, hardware backdoors and software backdoors, and we implemented clients for those backdoors. The client works on AMD Ryzen machines but it also works on any machine that has these ASMedia chipsets, and so quite a few motherboards and other PCs are affected by these vulnerabilities as well. If you search online for motherboard drivers, such as on the ASUS website, and download ASMedia drivers for your motherboard, then that motherboard is likely vulnerable to the same issues as you would find on the AMD chipset. We have verified this on at least six vendor motherboards, mostly from the Taiwanese manufacturers. So yeah, those products are affected.

IC: On the website, CTS-Labs states that the 0-day/1-day way of public disclosure is better than the 90-day responsible disclosure period commonly practiced in the security industry. Do you have any evidence to say that the paradigm you are pursuing with this disclosure is any better?

YLZ: I think there are pros and cons to both methods. I don’t think that it is a simple question. I think that the advantage of the 30 to 90 days of course is that it provides an opportunity for the vendor to consider the problem, comment on the problem, and provide potential mitigations against it. This is not lost on us.

On the other hand, I think that it also gives the vendors a lot of control over how they want to address these vulnerabilities: they can first deal with the problem and then come out with their own PR about it – I’m speaking generally and not about AMD in particular here – and in general they attempt to minimize the significance. If the problem is indicative of a widespread issue, as is the case with the AMD processors, then the company probably would want to minimize it and play it down.

The second problem is that if mitigations are not available in the relevant timespan, this paradigm does not make much sense. You know, we were talking to experts about the potential threat posed by these issues, and some of them are in the logic segment, in ASICs, so there is no obvious direct patch that can be developed as a workaround – a fix may or may not be available. The others require issuing a patch in the firmware and then going through the QA process, and typically when it comes to processors, QA is a multi-month process.

I estimate it will be many many months before AMD is able to patch these things. If we had said to them, let’s say, ‘you guys have 30 days/90 days to do this’ I don’t think it would matter very much and it would still be irresponsible on our part to come out after the period and release the vulnerabilities into the open.

So basically the choice that we were facing in this case was this: either we do not tell the public, let the company fix it, and only then disclose – in which case we would have to wait, by our estimate, as much as a year, meanwhile everyone is using the flawed product. Or, alternatively, we never release the vulnerability details, give them to the company, and disclose the existence of the flaws at the same time we are giving them to the company, so that customers are aware of the risks of those products and can decide whether to buy and use them, and so on.

In this case we decided that the second option is the more responsible one, but I would not* say that in every case that this is the better method. But that is my opinion. Maybe Ilia (CTO) has a slightly different take on that. But these are my concerns.

*Editor’s Note: In our original posting, we missed out the ‘not’ which negates the tone of this sentence. Analysis and commentary have been updated as a result.

IC: Would it be fair to say that you felt that AMD would not be able to mitigate these issues within a reasonable time frame, therefore you went ahead and made them public?

YLZ: I think that is a very fair statement. I would add that we saw that it was a big enough issue that the consumer had the right to know about it.

IC: Say, for example, CTS-Labs had been the ones to find Meltdown and Spectre – would you have followed the same path of logic?

YLZ: I think that it would have depended on the circumstances of how we found it, how exploitable it was, how reproducible it was. I am not sure it would be the case. Every situation I think is specific.

HGST Deskstar NAS 4 TB Review

The traditional market for hard drives (PCs and notebooks) is facing a decline due to the host of advantages provided by SSDs. However, the explosion in the amount of digital content generated by households and businesses has resulted in the rapid growth of the SMB / SOHO / consumer NAS market. Hard drive vendors have jumped on to this opportunity by tweaking the firmware and manufacturing process of their drives to create lineups specifically suited for the NAS market.

We have already had comprehensive coverage of a number of 4 TB NAS drives and a few 6 TB ones. One of the drives that we couldn’t obtain in time for our initial 4 TB roundup was the HGST Deskstar NAS. After getting sampled last month, we put the 4 TB version of the HGST Deskstar NAS through our evaluation routine for NAS drives. While most of our samples are barebones, HGST sampled us their retail kit, which includes mounting screws and an installation guide.

The correct choice of hard drives for a NAS system is influenced by a number of factors. These include expected workloads, performance requirements and power consumption restrictions, amongst others. In this review, we will discuss some of these aspects while comparing the HGST Deskstar NAS against other drives targeting the NAS market. The list of drives that we will be looking at today is listed below.

HGST Deskstar NAS (HDN724040ALE640)
WD Red Pro (WD4001FFSX-68JNUN0)
Seagate Enterprise Capacity 3.5″ HDD v4 (ST4000NM0024-1HT178)
WD Red (WD40EFRX-68WT0N0)
Seagate NAS HDD (ST4000VN000-1H4168)
WD Se (WD4000F9YZ-09N20L0)
Seagate Terascale (ST4000NC000-1FR168)
WD Re (WD4000FYYZ-01UL1B0)
Seagate Constellation ES.3 (ST4000NM0033-9ZM170)
Toshiba MG03ACA400
HGST Ultrastar 7K4000 SAS (HUS724040ALS640)
Prior to proceeding with the actual review, it must be made clear that the above drives do not target the same specific market. For example, the WD Red and Seagate NAS HDD are for 1-8 bay NAS systems in the tower form factor. The WD Red Pro is meant for rackmount units up to 16 bays, but is not intended to be a replacement for drives such as the WD Re, Seagate Constellation ES.3, Seagate Enterprise Capacity v4 and the Toshiba MG03ACA400, which target enterprise applications requiring durability under heavy workloads. The WD Se and the Seagate Terascale target the capacity-sensitive cold storage / data center market.

The HGST Deskstar NAS is supposed to slot in between the WD Red and the WD Red Pro. HGST doesn’t specify an upper limit on the number of bays, but mentions only desktop form factor systems. Like other NAS drives, it is rated for 24×7 operation and includes a rotational vibration sensor for increased reliability.

Testbed Setup and Testing Methodology

Our NAS drive evaluation methodology consists of putting the units to the test in both DAS and NAS environments. We start off with a feature set comparison of the various drives, followed by a look at the raw performance when connected directly to a SATA 6 Gbps port. In the same PC, we also evaluate the performance of the drive using some aspects of our direct attached storage (DAS) testing methodology. For evaluation in a NAS environment, we configure three drives of each model in a RAID-5 volume and process selected benchmarks from our standard NAS review methodology. Since our NAS drive testbed supports both SATA and SAS drives, but our DAS testbed doesn’t, only SATA drives are subject to the DAS benchmarks.
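For readers unfamiliar with the RAID-5 configuration used in our NAS tests, here is a minimal sketch of the idea: one drive’s worth of capacity holds XOR parity, which is what lets the volume survive a single drive failure while leaving (n-1) drives of usable space. The byte values below are arbitrary illustrations:

```python
# RAID-5 in miniature: XOR parity across data blocks lets any single
# failed drive be rebuilt from the surviving drives.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

data1 = b"\x01\x02\x03\x04"        # block stored on drive 1
data2 = b"\x10\x20\x30\x40"        # block stored on drive 2
parity = xor_blocks(data1, data2)  # parity block stored on drive 3

# Simulate losing drive 1: its block is recovered from drive 2 + parity.
rebuilt = xor_blocks(data2, parity)
assert rebuilt == data1

def usable_capacity_tb(drives: int, capacity_tb: float) -> float:
    """Usable RAID-5 capacity: one drive's worth of space goes to parity."""
    return (drives - 1) * capacity_tb

print(usable_capacity_tb(3, 4.0))  # three 4 TB drives -> 8.0 TB usable
```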

We used two testbeds in our evaluation, one for benchmarking the raw drive and DAS performance and the other for evaluating performance when placed in a NAS unit.

In the above testbed, the hot swap bays of the Corsair Air 540 have to be singled out for special mention. They were quite helpful in getting the drives processed in a fast and efficient manner for benchmarking. For NAS evaluation, we used the QNAP TS-EC1279U-SAS-RP. This is very similar to the unit we reviewed last year, except that it has a slightly faster CPU, more RAM and support for both SATA and SAS drives.

The NAS setup itself was subjected to benchmarking using our standard NAS testbed.

Synology Launches ARM-based DS1515 and RS815 Value Series NAS Units

Synology introduced their x15+ series in the second half of 2014. The models were all based on the Intel Rangeley platform (x86). It is now time for a refresh of the Value Series using ARM-based SoCs – the x15 models. There are two units being introduced today, the 5-bay DS1515 in the tower form factor (MSRP: $650) and the 4-bay RS815 in a new short-depth rackmount form factor (MSRP: $600).

Based on the Annapurna Labs Alpine AL-314 quad-core Cortex-A15 SoC, the DS1515 has four GbE LAN ports, two USB 3.0 ports and two eSATA ports. The eSATA ports can be used to connect up to two DX513 / DX213 expansion units, providing a maximum of 10 bays in addition to the five on the main unit. The DS1515 comes with 2GB of RAM.

The unit comes with Synology’s widely respected DiskStation Manager DSM 5.2 OS, supporting a wide variety of networking protocols, applications and add-on packages. The AL-314 SoC comes with hardware encryption engines and a dedicated floating point unit. The presence of four LAN ports helps in setting up a high-performance, high-availability cluster. Claimed throughput numbers indicate up to 403.7 MBps reads and 421.8 MBps writes.
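Those claimed numbers only make sense in the context of the four aggregated LAN ports, since a single GbE link tops out at 125 MB/s raw (less after protocol overhead). A quick check, using assumed raw line rates and ignoring overhead:

```python
# Sanity check: the quoted 403.7 / 421.8 MBps figures exceed a single
# GbE link, so they imply aggregation across the DS1515's four LAN ports.

def aggregate_ceiling_mbps(ports: int, link_gbps: float = 1.0) -> float:
    """Raw throughput ceiling in MB/s across aggregated links (no overhead)."""
    return ports * link_gbps * 1000 / 8

ceiling = aggregate_ceiling_mbps(4)
print(ceiling)  # 500.0 MB/s raw ceiling across four GbE links

# The claimed figures sit below the raw four-link ceiling, as expected.
assert 403.7 <= ceiling and 421.8 <= ceiling
```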

The RS815 solution is internally the same as the RS814 introduced last year. Carrying the same Marvell ARMADA XP MV78230 that we reviewed a couple of years back in the LenovoEMC ix4-300d, the performance numbers come in at 216.7 MBps reads and 121.8 MBps writes. The eSATA port allows the connection of an RX415 expansion unit, providing a maximum of 4 bays in addition to the four on the main unit.

The important update is the short-depth chassis design. The RS815 is only 29 cm deep, compared to the RS814’s 46 cm. This allows for denser deployments and compatibility with industrial server environments.

It is refreshing to see Synology continuing to invest in ARM-based models for the Value Series. While these are not true 64-bit solutions yet, they will ensure that Intel x86-based solutions are not the only game in town for high-performance NAS units. The presence of four GbE ports on the DS1515 brings a host of exciting use-cases to the table. The Alpine platform might just about make the grade for 10G NAS units, but it should be an excellent choice for NAS units with GbE LAN ports. The RS815, on the other hand, takes a tried and tested platform and fits it in a new chassis to expand its application areas.

QNAP Begins to Ship AMD Ryzen-Based TS-x77 Series NAS: 6, 8, 12 Bays

QNAP on Wednesday said that it had begun to ship its NAS devices based on AMD’s Ryzen processors. The new TS-677, TS-877 and TS-1277 NAS feature six, eight, or twelve hard drive bays and support SSD caching to maximize performance. The company positions the new NAS for various demanding applications, including VDI, private cloud, virtualization, containerized applications and so on.

QNAP introduced its TS-x77-series NAS at Computex in mid-2017, becoming the first supplier of such products to adopt the AMD Ryzen platform. The company explained that high core count, strong integer performance, versatile PCIe support, AES-NI support, competitive pricing, and other factors made the CPU a good choice for NAS. Integrated capabilities of the Ryzen platform enabled QNAP to support two M.2-22110 PCIe 3.0 x4 slots for caching SSDs, as well as two PCIe 3.0 x4 slots and a PCIe 3.0 x8 slot for 10GbE/40GbE NICs, PCIe NVMe SSDs, graphics and other expansion cards in all three models of the TS-x77 series.
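For reference, the raw bandwidth of those PCIe 3.0 slots can be estimated from the per-lane line rate. This is a sketch using the PCIe 3.0 figures of 8 GT/s per lane with 128b/130b encoding, ignoring packet overhead:

```python
# Raw per-direction bandwidth of PCIe 3.0 slots: 8 GT/s per lane,
# 128b/130b encoding, packet overhead ignored.

def pcie3_bandwidth_gbs(lanes: int) -> float:
    """Approximate raw PCIe 3.0 bandwidth in GB/s, per direction."""
    return lanes * 8e9 * (128 / 130) / 8 / 1e9

for lanes in (4, 8):
    print(f"x{lanes}: {pcie3_bandwidth_gbs(lanes):.2f} GB/s")
```

So each x4 slot offers roughly 3.9 GB/s per direction – plenty for a 40GbE NIC (about 5 GB/s raw) only in the x8 slot, which is presumably why QNAP routes the faster NICs there.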

The top-of-the-range 12-bay QNAP TS-1277 NAS is based on AMD’s eight-core Ryzen 7 1700 CPU and comes with 64 GB of DDR4 memory in order to handle various applications. Meanwhile, mid-range and entry-level models featuring 12, 8 or 6 bays use the six-core AMD Ryzen 5 1600 and come with 16 GB or 8 GB of memory (see the table below for specifications of the U.S. versions; for others, it makes sense to check out the original news story from Computex). All the QNAP TS-x77-series NAS support RAID 50/60 as well as Qtier 2.0 IO Aware features for SSD tiered storage.

The new NAS will run QNAP’s QTS 4.3 operating system and therefore will support the same capabilities as other NAS from the manufacturer. In addition, the QTS 4.3 supports various specially designed applications. The TS-x77 devices are also virtualization ready for VMware, Citrix, Microsoft Hyper-V and Windows Server 2012 R2 environments with the support of iSER (iSCSI Over RDMA). The powerful CPUs inside enable the NAS to host virtual machines and run various applications.

Originally, QNAP promised to ship the TS-x77 series NAS sometime in Q3, but then delayed them to November, so expect the products to be available from retailers in the coming couple of weeks. Given the positioning, the new TS-677, TS-877 and TS-1277 NAS from QNAP are not going to be cheap. The manufacturer did not reaffirm pricing of the new units in its recent press release, but based on claims made earlier this year, the most affordable TS-677 (Ryzen 5 1600, 8 GB DDR4, no drives) will retail for $1699, whereas the high-end TS-1277 (Ryzen 7 1700, 64 GB, no drives) will cost $3599. Actual specs and MSRPs may vary by region, and fully-populated NAS will naturally cost more.

VIA Apollo KX133 Athlon Chipset – Part 1

Be sure to read Part 2 of our KX133 Review for more information on the chipset’s performance.

The slow demise of the Socket-7 platform quite possibly summed up a period in time when the hardware enthusiast was given the most choices when putting together a system. At the peak of the platform’s existence, there were three major CPU manufacturers producing processors for Socket-7 motherboards, there were solutions available in both AT and ATX form factors, and from a chipset standpoint, the platform had three chipset solutions from Intel and another three from VIA.

That theme of variety from the old Socket-7 days has long since been abandoned; until well into the establishment of the Slot-1 platform, all chipsets manufactured were made by Intel. That same trend seemed to be mirrored with the introduction of AMD’s Athlon late last year. One of the worries for the success of the Athlon that we shared at AnandTech was platform chipset support. While AMD announced that both ALi and VIA would have solutions ready for the Athlon, as launch time approached, it quickly became obvious that neither the ALi nor the VIA solutions would be ready for the release of the Athlon.

So what chipset would the Athlon launch with? AMD had done all of their internal testing and tweaking using their own in-house developed chipset, internally known as the Irongate chipset but commonly known to us as the AMD 750 chipset. The AMD 750 boasted AGP 2X and PC100 SDRAM support courtesy of the AMD 751 North Bridge as well as Ultra ATA 66 courtesy of the AMD 756 South Bridge.

It wasn’t too long before Athlon based motherboards began shipping with hybrids of the AMD chipset and VIA’s upcoming solution. Motherboards like the ASUS K7M and FIC SD-11 featured AMD’s 751 North Bridge but VIA’s 686A South Bridge, in order to move away from using AMD as a chipset supplier.

Last November, we were told that VIA’s upcoming Athlon chipset, the Apollo KX133, was already complete and they were hard at work with motherboard manufacturers to make sure that the delicate implementation of the chipset was handled properly. This would help to eliminate any of the motherboard problems that the first wave of Athlon boards based on the AMD 750 chipset so regretfully boasted.

Finally, on January 10 of this year, VIA announced that they had begun volume shipping of the “first independently developed chipset to support the AMD Athlon processor”, known to all of us as the KX133. The release of the KX133 puts VIA in the position of a virtual monopoly in the Athlon market, since motherboard manufacturers will refrain from producing many (if any at all) AMD 750 based solutions, and since ALi’s Athlon solution has yet to be seen other than behind a glass display case at last year’s Fall Comdex.

With the exception of a few Athlon motherboards that were being developed with the AMD 750 in mind, all Athlon motherboards that will be shipping from manufacturers that have yet to enter the Athlon motherboard market will be KX133 based solutions. It won’t be long before the AMD 750 disappears from the market and VIA assumes the role of exclusive Athlon chipset provider for the time being. Scary thought?

It shouldn’t be. VIA has never been known to abuse its power during the times when it has been given the upper hand in a market (i.e. the Super7 market). The only question is: in spite of VIA’s history, can the KX133 step up to the plate and offer performance and compatibility (the latter being a weak point in VIA’s history) superior to that of the AMD 750?