The i-Rocks Pilot K70E Capacitive Gaming Keyboard Review: Our First Capacitive Keyboard

We have reviewed many keyboards here at AnandTech, both membrane and mechanical. In today’s market, most cost-effective keyboards are based on membrane designs, while more advanced keyboards use mechanical switches that are either made by Cherry or, more commonly, are “cloned” versions of Cherry’s products. Recently, however, something relatively rare was shipped to our labs for testing – the i-Rocks Pilot K70E, a keyboard with unique capacitive switches.

Capacitive switches are not something unique to this keyboard. As a matter of fact, the current top-of-the-line capacitive keyboard switches were introduced by Topre several years ago. The problem with Topre-based products is that their prices are excessive, placing them well outside what the mainstream market can afford.

The i-Rocks Pilot K70E keyboard that we are reviewing today has non-contact capacitive switches developed in-house by i-Rocks itself. The Taiwanese company’s capacitive switches are available in two variants, 45g and 60g, with slightly different force-to-travel charts. The retail price of the Pilot K70E is rather steep, with the keyboard retailing at $150 at the time of this review, and yet that price is significantly lower than that of any keyboard using Topre’s capacitive switches.

Hands-on with the GeForce RTX 2080 Ti: Real-time Raytracing in Games

After yesterday’s announcement from NVIDIA, we finally know what’s coming: the GeForce RTX 2080 Ti, GeForce RTX 2080, and GeForce RTX 2070. So naturally, after the keynote in the Palladium venue, NVIDIA provided hands-on demos and gameplay as the main event of their public GeForce Gaming Celebration. The demos in question were all powered by the $1200 GeForce RTX 2080 Ti Founders Edition, with obligatory custom watercooling rigs showing off their new gaming flagship.

While NVIDIA also has a presence at Gamescom 2018 proper, this event is their main venue for showcasing the new GeForce RTX cards. In a separate walled-off area, NVIDIA offered press some gameplay time with two GeForce RTX-supporting titles: Shadow of the Tomb Raider and Battlefield V. They also had a veritable army of RTX 2080 Ti-equipped gaming PCs for the public, demoing Battlefield V and Shadow of the Tomb Raider (without RTX features), along with Hitman 2 and Metro: Exodus. Additionally, there were a few driving simulator rigs for Assetto Corsa Competizione, including one with hydraulic feedback. These games, and more, support real-time ray tracing with RTX, but not necessarily Deep Learning Super Sampling (DLSS), another technology that NVIDIA announced.

Spectre and Meltdown in Hardware: Intel Clarifies Whiskey Lake and Amber Lake

With the launch of Intel’s latest 8th Generation Core mobile processors, the 15W Whiskey Lake U-series and the 5W Amber Lake Y-series, questions were left on the table as to the state of the Spectre and Meltdown mitigations. Intel had, previously in the year, promised that there would be hardware fixes for some of these issues in consumer hardware by the end of the year. Nothing was mentioned in our WHL/AML briefing, so we caught up with Intel to find out the situation.

There Are Some Hardware Mitigations in Whiskey Lake
The takeaway message from our discussions with Intel is that there are some hardware mitigations in the new Whiskey Lake processors – in fact, almost as many as in the upcoming Cascade Lake enterprise parts. Intel told us that the goal was to be transparent in general about how these mitigations were being implemented; nonetheless, we think Intel misread the level of interest in the specifics in advance of the Whiskey Lake launch, especially when the situation is not a simple yes/no.

What this means is that Whiskey Lake is a new spin of silicon compared to Kaby Lake Refresh, but is still built on that Kaby Lake microarchitecture. Intel confirmed to us that Whiskey Lake is indeed built on the 14++ process node technology, indicating a respin of silicon.

As a result, both Whiskey Lake and Cascade Lake have the all-important (and most performance-degrading) Meltdown vulnerability fixed in hardware. What remains unfixed in Whiskey Lake, and differentiates it from Cascade Lake, is Spectre variant 2, Branch Target Injection. This vulnerability has its own performance costs when mitigated in software, and it has taken longer to develop a hardware fix.

What About Amber Lake?
The situation with Amber Lake is a little different. Intel confirmed to us that Amber Lake is still Kaby Lake – including being built on the 14+ process node – making it identical to Kaby Lake Refresh as far as the CPU die is concerned. In essence, these parts are binned to go within the 5W TDP at base frequency. But as a result, Amber Lake shares the same situation as Kaby Lake Refresh: all side channel attacks and mitigations are done in firmware and operating system fixes. Nothing in Amber Lake is protected against in hardware.

Performance
The big performance marker is tackling Spectre Variant 2. When fixed in software, Intel expects a 3-10% drop in performance depending on the workload; when fixed in hardware, Intel says the performance drop is much smaller, though it expects new platforms (like Cascade Lake) to offer better overall performance anyway. Neither Whiskey Lake nor Amber Lake has a hardware mitigation for v2, but Whiskey Lake is certainly well on its way, with hardware fixes for some of the more dangerous attacks, such as v3 (Meltdown) and L1TF. Whiskey Lake also offers new performance bins as the platform moves to 14++, which will help with performance and power.
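On Linux, the kernel reports which of these mitigations are active on the running CPU through sysfs, which makes it straightforward to check whether a given chip relies on software mitigations or is unaffected in hardware. A minimal sketch, assuming a Linux system that exposes `/sys/devices/system/cpu/vulnerabilities/` (the `classify` helper and its category names are our own):

```python
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def classify(status: str) -> str:
    """Map a kernel vulnerability status string to a coarse category."""
    if status.startswith("Not affected"):
        return "unaffected-or-fixed-in-hardware"
    if status.startswith("Mitigation"):
        return "mitigated-in-software"
    if status.startswith("Vulnerable"):
        return "unmitigated"
    return "unknown"

def read_vulnerabilities(base: str = VULN_DIR) -> dict:
    """Return {vulnerability_name: status_line} for the running CPU."""
    results = {}
    if not os.path.isdir(base):
        return results  # non-Linux system or a kernel too old to report
    for name in sorted(os.listdir(base)):
        with open(os.path.join(base, name)) as f:
            results[name] = f.read().strip()
    return results

if __name__ == "__main__":
    for name, status in read_vulnerabilities().items():
        print(f"{name:20s} {classify(status):32s} {status}")
```

On a Whiskey Lake system with a current kernel, one would expect `meltdown` and `l1tf` to read as unaffected while `spectre_v2` still shows a software mitigation such as retpoline.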

Intel’s Disclosure in the Future
Speaking with Intel, it is clear that they now recognise the level of interest in the scope of these fixes. We’re pushing hard to make sure that all future launches come with detailed tables on the progress of the fixes. Progress on these issues, if anything, is a good thing.

AnandTech at Hot Chips 30: Our 2018 Show Coverage

The last couple of days have been a whirlwind of coverage at two key events: Hot Chips, the semiconductor industry conference on new product designs, and some minor thing at Gamescom. At Hot Chips, we planned to run over a dozen different Live Blogs, and we have written up several of the talks into more detailed analysis pieces.

Hot Chips is one of the most enjoyable trade shows I go to every year: in the absence of IDF, Hot Chips is a show where we can learn significant information about cores already in the market – server, desktop, or mobile – as well as cores coming in future products. It also gives companies a chance to go into more detail, or to explain how their current products will lead into the future. The other yearly trade show that gives me goosebumps is SuperComputing.

Because we’ve got plenty of content about the show, I just wanted to run a small piece that our readers can access without searching for it. Here is our day one roundup; a day two roundup will follow when day two finishes.

Over the following months, much of the hype for the new Exynos 9810 with its M3 cores fizzled out due to less and less enticing results: starting with some questionable early benchmarks at the release of the Galaxy S9, through our extremely in-depth Galaxy S9 device and SoC review, and later moving to DIY attempts at resolving some of the lower-hanging fruit among the software issues that hampered the real-world performance of the Exynos Galaxy S9. Throughout these pieces we of course had little official word from Samsung – and until today we still didn’t know much about how the M3 microarchitecture actually worked.

Best external hard drives of 2018

Even if you have one of the best SSDs, you can quickly run out of space – so it’s critically important to have one of the best external hard drives, especially if you work with a lot of documents with large file sizes. Don’t worry, though: we at TechRadar are here to help you find the best external hard drive money can buy today.

When you go out shopping for one of the best external hard drives, you should think about some important details. For one, you’ll need enough storage – trust us, you don’t want to run out of space at an inopportune moment. However, you also don’t want to pay for storage you’re not going to use.

You’ll also need to consider data transfer speeds – the best hard drives let you transfer large files from your PC quickly, so you can move on to more important projects.
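Transfer time is just capacity divided by sustained throughput, so the gap between a slow and a fast drive is easy to quantify. A back-of-the-envelope sketch (the speeds and the 250 GB library size are illustrative figures, not measurements):

```python
def transfer_seconds(size_gb: float, speed_mb_s: float) -> float:
    """Time to copy size_gb gigabytes at a sustained speed_mb_s megabytes/second.

    Uses decimal units throughout (1 GB = 1000 MB), as drive makers do.
    """
    return (size_gb * 1000.0) / speed_mb_s

# Illustrative sustained speeds: ~100 MB/s for a typical portable
# spinning drive, ~400 MB/s for an external SSD over USB 3.x.
for label, speed in [("portable HDD", 100), ("external SSD", 400)]:
    secs = transfer_seconds(250, speed)  # copying a 250 GB photo library
    print(f"{label}: about {secs / 60:.0f} minutes")
```

By this rough math, a 250 GB copy that takes over 40 minutes on a typical spinning drive drops to around 10 minutes on a faster external SSD.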

The best external hard drives are also dependable and rugged, so you can safely store your data without worry. They should be light enough to carry in your bag, too, with capacities large enough to keep your data safe when travelling.

There’s a huge range of external hard drives on offer, so we’ve put together this list of the best external hard drives to help you find the perfect one for your needs.

Intel NUC8i7HVK (Hades Canyon) Gaming Performance – A Second Look

The Intel NUC8i7HVK (Hades Canyon) was reviewed in late March, and emerged as one of the most powerful gaming PCs in its form-factor class. Our conclusion was that the PC offered gaming performance equivalent to that of a system with a GPU between the NVIDIA GTX 960 and GTX 980. We received feedback from our readers that the games used for benchmarking were old, and that the compared GPUs were dated. In order to address this concern, we spent the last few weeks updating our gaming benchmark suite for gaming systems / mini-PCs. With the updated suite in hand, we put a number of systems through their paces. This article presents the performance of the Hades Canyon NUC with the latest drivers in recent games. We also pulled in the gaming benchmark numbers from a couple of systems still in our review queue, in order to give readers an idea of the performance of the Hades Canyon NUC compared to some of the other contemporary small-form-factor gaming machines.

Introduction
The gaming benchmark suite used to evaluate the Hades Canyon NUC in our launch review was dated and quite limited in scope. Games such as Sleeping Dogs and Bioshock Infinite are no longer relevant to consumers looking to purchase gaming systems. In addition, our suite did not include any DirectX 12 games. In order to address these issues, we set out to identify some modern games for inclusion in our gaming benchmarks. The intent was to have a mix of games and benchmarks that could serve us well for the next couple of years.

The updated gaming benchmark suite has both synthetic and real-world workloads. Futuremark’s synthetic benchmarks give a quick idea of the prowess of the GPU component in a system. We process and present results from all the standard workloads in both 3DMark (v 2.4.4264) and VRMark (v 1.2.1701). Real-world use-cases are represented by six different games:

Best mining CPU 2018: the best processors for mining cryptocurrency

If you’re looking for the best processors for cryptocurrency mining in 2018, then you’ve come to the right place, as we’ve listed the very best CPUs for mining a range of cryptocurrencies.

While many people think that graphics cards are the most important component when it comes to mining, getting the right CPU for your mining rig is also important.

It may be tempting to go for the cheapest possible CPU in order to maximise your mining profits, but you may actually be hampering your rig. As AMD revealed in an interview with us recently, mining with a CPU can result in some impressive profits.

Pair the best mining CPU with the best mining GPU and best mining motherboard, and choose the best cryptocurrency for your needs, then you’ll soon have a mining powerhouse that can start earning you a fair chunk of money, helping to pay off the costs of the hardware in the long run.

So, if you’re keen to make the most out of the current cryptocurrency craze, here are the best CPUs for mining in 2018.

MACOM Sells AppliedMicro’s X-Gene CPU Business

MACOM last week announced that it has entered into an agreement to sell the microprocessor-related assets it bought from AppliedMicro to Project Denver Holdings, a new company backed by The Carlyle Group asset management company.

MACOM closed the acquisition of AppliedMicro early in 2017. Back then, the company made no secret that it was primarily interested in AppliedMicro’s MACsec and 100G to 400G solutions, but not in the company’s X-Gene server CPUs. MACOM’s plan was to become a leader in datacenter communication technologies with a focus on optical networks in particular (analog, photonic and mixed-signal PHYs). As a result, the X-Gene business was not exactly the best fit for MACOM, and the future of the X-Gene processor division has been unclear.

The X-Gene 3 server platform looked promising when it was introduced last November. The CPU has 32 custom ARMv8 cores running at up to 3 GHz, with 32 MB of L3 cache, eight DDR4-2667 memory channels with ECC, and 42 PCIe 3.0 lanes. MACOM started to sample the X-Gene 3 among interested parties this March, and Kontron even demonstrated a server based on the CPU at MWC 2017. MACOM has not started commercial shipments of the X-Gene 3 yet; nonetheless, the X-Gene 3 and its possible successors were impressive enough for The Carlyle Group to establish a new entity that will finalize the X-Gene 3 and continue development efforts.
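Those memory specifications translate into substantial theoretical bandwidth: each DDR4-2667 channel moves 2667 million transfers per second across a 64-bit (8-byte) bus. A quick sketch of the arithmetic:

```python
def peak_bandwidth_gb_s(channels: int, transfers_mt_s: int,
                        bus_bytes: int = 8) -> float:
    """Peak theoretical DRAM bandwidth in GB/s (1 GB = 1e9 bytes).

    DDR transfer rate (MT/s) already counts both edges of the clock,
    so no doubling is needed here.
    """
    return channels * transfers_mt_s * 1e6 * bus_bytes / 1e9

# X-Gene 3: eight DDR4-2667 channels
print(f"{peak_bandwidth_gb_s(8, 2667):.1f} GB/s")  # → 170.7 GB/s
```

That roughly 170 GB/s peak figure is a theoretical ceiling; sustained bandwidth in practice is lower due to refresh, command overhead, and access patterns.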

Neither MACOM nor Carlyle has disclosed the financial terms of the deal, but MACOM will get a minority stake in Project Denver Holdings. The new company has its own leadership team and strong financial backing from Carlyle Partners VI (a $13 billion U.S. buyout fund). Assuming that Project Denver Holdings keeps AppliedMicro’s development team and invests a sufficient amount of money in the X-Gene in general, the new company will have a chance to remain a leading supplier of ARMv8-based server CPUs. At the moment, the X-Gene is used by over half a dozen server makers, so Project Denver Holdings is getting a business with existing, incoming and future products, as well as customers.

Intel Mentions 10nm, Briefly

LAS VEGAS, NV – Today during a breakfast presentation at CES, Intel’s Gregory Bryant, SVP of the Client Computing Group, finally broke Intel’s silence on the state of their 10nm process. If you were looking for some spectacular news about the state of 10nm, this wasn’t it: Mr Bryant stated that Intel met its goal of shipping 10nm processors to customers in 2017 – though to whom isn’t being said – and that Intel is ready to ramp up production through 2018. This is a severely limited update compared to last year’s CES, where Intel showed off a device with a 10nm CPU during the main keynote – pushing this news to a side meeting on the show floor will raise further questions on the state of Intel’s 10nm process.

More information as it comes in. When we hit a WiFi spot, we will upload the full presentation video.

Our Interesting Call with CTS-Labs

In light of the recent announcement of potential vulnerabilities in Ryzen processors, two stories have emerged. The first is that AMD processors could have secondary vulnerabilities in the secure processor and ASMedia chipsets. The second concerns the company that released the report, CTS-Labs: their approach to this disclosure, the background of this previously unknown security-focused outfit, and their intentions as well as their corporate structure. Depending on the angle you take in the technology industry – as a security expert, a company, the press, or a consumer – one of these stories should interest you.

In our analysis of the initial announcement, we took time to look at what information we had on the flaws, as well as identifying a number of key features about CTS-Labs that did not fit our standard view of a responsible disclosure, along with a few points on Twitter that did not seem to add up. Since then, we have approached a number of experts in the field and a number of companies involved, and attempted to drill down into the parts of the story that are not so obvious. I must thank the readers that reached out to me over email and through Twitter, who have helped immensely in getting to the bottom of what we are dealing with.

On the back of this, CTS-Labs has been giving a number of press interviews, leading to articles such as this one at our sister site, Tom’s Hardware. CTS reached out to us as well; however, a number of factors led to delaying the call. Eventually we found a time to suit everyone, and it was confirmed in advance that everyone was happy for the call to be recorded for transcription purposes.

Joining me on the call was David Kanter, a long-time friend of AnandTech, semiconductor industry consultant, and owner of Real World Technologies. From CTS-Labs, we were speaking with Ido Li On, CEO, and Yaron Luk-Zilberman, CFO.

The text here was transcribed from the recorded call. Some superfluous/irrelevant commentary has been omitted, with the wording tidied a little to be readable.

This text is being provided as-is, with minor commentary at the end. There is a substantial amount of interesting detail to pick through. We try to tackle both of the sides of the story in our questioning.

IC: Who are CTS-Labs, and how did the company start? What are the backgrounds of the employees?

YLZ: We are three co-founders, graduates of a unit called 8200 in Israel, a technological intelligence unit. We have a background in security: two of the co-founders have spent most of their careers in cyber-security, working as consultants for the industry and performing security audits for financial institutions, defense organizations, and so on. My background is in the financial industry, but I have a technological background as well.

We came together at the beginning of 2017 to start this company, whose focus was to be on hardware in cyber security. As you guys probably know, this is a frontier/niche area now that most of the low-hanging fruit in software has been picked. So this is where the game is moving, we think at least. The goal of the company is to provide security audits, and to deliver reports to our clients on the security of those products.

This is our first major publication. Mostly we do not go public with our results, we just deliver our results to our customers. I should say very importantly that we never deliver the vulnerabilities themselves that we find, or the flaws, to a customer to whom the product does not belong. In other words, if you come to us with a request for an audit of your own, we will give you the code and the proof-of-concepts, but if you want us to audit someone else’s product, even as a consumer of a product or a competitor’s product, or a financial institution, we will not give you the actual code – we will only describe to you the flaw that we find.

This is our business model. This time around, in this project, we started with ASMedia, and as you probably know the story moved to AMD, as they imported the ASMedia technology into their chipset. Having studied one, we started studying the other. This became a very large and important project, so we decided we were going to go public with the report. That is what has brought us here.

IC: You said that you do not provide flaws to companies that are not the manufacturer of what you are testing. Does that mean that your initial ASMedia research was done with ASMedia as a customer?

ILO: No. We can audit a product that the manufacturer of the product orders from us, or that somebody else – such as a consumer or an interested third party – orders from us, and in the latter case we will provide a description of the vulnerabilities, much like our whitepaper, but without the technical details needed to actually implement the exploit.

Actually ASMedia was a test project, as we’re engaged in many projects, and we were looking into their equipment and that’s how it started.

IC: Have you, either professionally or as a hobby, published exploits before?

ILO: No, we have not. That being said, we have been working in this industry for a very long time: we have done security audits for companies, found vulnerabilities, and given that information to the companies as part of consultancy agreements, but we have never actually gone public with any of those vulnerabilities.

IC: What response have you had from AMD?

ILO: We got an email today to say they were looking into it.

DK: If you are not providing Proof of Concept (PoC) to a customer, or technical details of an exploit, with a way to reproduce it, how are you validating your findings?

YLZ: After we do our validation internally, we take a third party validator to look into our findings. In this case it was Trail of Bits, if you are familiar with them. We gave them full code, full proof of concept with instructions to execute, and they have verified every single claim that we have provided to them. They have gone public with this as well.

In addition to that, in this case we also sent our code to AMD, and then to Microsoft, HP, and Dell – the integrators – and also some other security partners. So they all have the findings. We decided not to make them public. The reason is that we believe it will take many, many months for the company, even under ideal circumstances, to come out with a patch. So if we wanted to inform consumers about the risks that they face with the product, we just couldn’t afford, in our minds, to not make the details public.

DK: Even when a security team has a good relationship with a company whose product has a potential vulnerability, simply verifying a security hole can take at least a couple of days. For example, with the code provided for Spectre, a security-focused outsider could look at the code and, within a few minutes, make educated guesses as to the validity of the claim.

ILO: What we’ve done is this. We found thirteen vulnerabilities, and we wrote a technical write-up on each one of those vulnerabilities, with code snippets showing exactly how they work. We have also produced working PoC exploits for each one of the vulnerabilities, so you can actually exploit each one of them. And we have produced very detailed tutorials on how to run the exploits on test hardware, step-by-step, to get all the results that we have been able to produce here in the lab. We documented it so well that when we gave it to Trail of Bits, they took it and ran the procedures by themselves, without talking to us, and reproduced every one of the results.

We took this package of documents, procedures, and exploits, and we sent it to AMD and the other security partners. This process took Trail of Bits about 4-5 days to complete, so I am very certain that the others will be able to reproduce the results as well. We also gave them a list of exactly what hardware to buy, along with instructions covering all the latest BIOS updates and everything.

YLZ: We faced the problem of how to make a third-party validator not just sit there and say ‘this thing works’, but actually do it themselves without contacting us. We had to write a detailed manual, a step-by-step kind of thing. So we gave it to them, and Trail of Bits came back to us in five days. I think that the guys we sent it to are definitely able to do it within that time frame.

IC: Can you confirm that money changed hands with Trail of Bits?

(This was publicly confirmed by Dan Guido earlier, who stated that they were expecting to look at one test out of curiosity, but 13 came through, so they invoiced CTS for the work. Reuters reports that a $16,000 payment was made as Trail of Bits’ verification fee for third-party vulnerability checking.)

YLZ: I would rather not make any comments about money transactions and things of that nature. You are free to ask Trail of Bits.

IC: The standard procedure for vulnerability disclosure is to have a CVE filing and Mitre numbers. We have seen public disclosures, even 0-day and 1-day public disclosures, carry relevant CVE IDs. Can you describe why you haven’t done so in this case?

ILO: We have submitted everything we have to US-CERT, and we are still waiting to hear back from them.

IC: Can you elaborate as to why you did not wait for those numbers to come through before going live?

ILO: It’s our first time around. We haven’t – I guess we should have – this really is our first rodeo.

IC: Have you been in contact with ARM or Trustonic about some of these details?

ILO: We have not, and to be honest with you I don’t really think it is their problem. AMD uses Trustonic t-Base as the base for the firmware on their secure processor, but they have built quite a bit of code on top of it, and in that code are security vulnerabilities that don’t have much to do with Trustonic t-Base. So we really don’t have anything to say about t-Base.

IC: As some of these attacks go through TrustZone, an Arm Cortex-A5, and the ASMedia chipsets, can you speak about whether other products with these features can also be affected?

ILO: I think that the vulnerabilities found are very much … Actually let us split this up between the processor and the chipset as these are very different.

For the secure processor, AMD built quite a thick layer on top of Trustonic t-Base. They added many features, including some that break the isolation between processes running on top of t-Base. So there are a bunch of vulnerabilities there that are not from Trustonic. In that respect we have no reason to believe that we would find these issues on any product that is not AMD’s.

Regarding the chipset, there you actually have vulnerabilities that affect a range of products, because, as we explained earlier, we first came to AMD by looking at ASMedia chips. Specifically, we were looking into several lines of chips; one of them is the USB host controller line from ASMedia. We’re talking about the ASM1042, ASM1142, and the recently released ASM1143. These are USB host controllers that you put on a motherboard: they connect on one side over PCIe, and on the other side they give you some USB ports.

What we found are these backdoors that we have been describing, which come built into the chips – there are two sets of backdoors, hardware backdoors and software backdoors, and we implemented clients for those backdoors. The client works on AMD Ryzen machines, but it also works on any machine that has these ASMedia chipsets, so quite a few motherboards and other PCs are affected by these vulnerabilities as well. If you search online for motherboard drivers, such as on the ASUS website, and you can download ASMedia drivers for your motherboard, then that motherboard is likely vulnerable to the same issues as you would find on the AMD chipset. We have verified this on at least six vendor motherboards, mostly from the Taiwanese manufacturers. So yeah, those products are affected.
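A reader wondering whether their own system carries one of these ASMedia controllers can check the PCI device list; on Linux, `lspci -nn` prints vendor:device IDs in brackets, and ASMedia’s PCI vendor ID is 1b21. A small sketch that filters such output (the sample line below is illustrative, and the exact device ID your system reports may differ):

```python
ASMEDIA_VENDOR_ID = "1b21"  # ASMedia Technology's PCI vendor ID

def find_asmedia_controllers(lspci_nn_output: str) -> list:
    """Return the lines of `lspci -nn` output that belong to ASMedia devices."""
    return [
        line for line in lspci_nn_output.splitlines()
        if f"[{ASMEDIA_VENDOR_ID}:" in line.lower()
    ]

# Illustrative `lspci -nn` line (hypothetical slot and device ID):
sample = ("02:00.0 USB controller [0c03]: ASMedia Technology Inc. "
          "ASM1142 USB 3.1 Host Controller [1b21:1242]")
for hit in find_asmedia_controllers(sample):
    print(hit)
```

In practice you would pipe the real output in, e.g. `lspci -nn | python3 check_asmedia.py` with the script reading `sys.stdin`; presence of a 1b21 device only means the controller is on board, not that any particular firmware version is exploitable.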

IC: On the website, CTS-Labs states that the 0-day/1-day way of public disclosure is better than the 90-day responsible disclosure period commonly practiced in the security industry. Do you have any evidence to say that the paradigm you are pursuing with this disclosure is any better?

YLZ: I think there are pros and cons to both methods; I don’t think it is a simple question. The advantage of the 30 to 90 days, of course, is that it provides an opportunity for the vendor to consider the problem, comment on it, and provide potential mitigations. This is not lost on us.

On the other hand, I think that it also gives the vendor a lot of control over how it wants to address these vulnerabilities: they can first deal with the problem, then come out with their own PR about it – I’m speaking generally and not about AMD in particular here – and in general they attempt to minimize its significance. If the problem is indicative of a widespread issue, as is the case with the AMD processors, then the company probably would want to minimize it and play it down.

The second problem is that if mitigations are not available within the relevant timespan, this paradigm does not make much sense. We were talking to experts about the potential threat of these issues, and some of them are in the logic segment, in ASICs, so there is no obvious direct patch that can be developed as a workaround – one may or may not be available. The other set requires issuing a patch in the firmware and then going through the QA process, and typically, when it comes to processors, QA is a multi-month process.

I estimate it will be many, many months before AMD is able to patch these things. If we had said to them, let’s say, ‘you guys have 30 days/90 days to do this’, I don’t think it would have mattered very much, and it would still have been irresponsible on our part to come out after that period and release the vulnerabilities into the open.

So basically the choice we were facing in this case was either to not tell the public, let the company fix it, and only then disclose – and in that circumstance we would have to wait, by our estimate, as much as a year, while meanwhile everyone is using the flawed product. Or, alternatively, we never disclose the vulnerability details publicly, we give them to the company, and we announce the flaws at the same time as we are giving them to the company, so that customers are aware of the risks of those products and can decide whether to buy and use them, and so on.

In this case we decided that the second option is the more responsible one, but I would not* say that in every case that this is the better method. But that is my opinion. Maybe Ilia (CTO) has a slightly different take on that. But these are my concerns.

*Editor’s Note: In our original posting, we missed out the ‘not’ which negates the tone of this sentence. Analysis and commentary have been updated as a result.

IC: Would it be fair to say that you felt that AMD would not be able to mitigate these issues within a reasonable time frame, therefore you went ahead and made them public?

YLZ: I think that is a very fair statement. I would add that we saw that it was a big enough issue that the consumer had the right to know about it.

IC: If, for example, CTS-Labs had been the ones to find Meltdown and Spectre, would you have followed the same path of logic?

YLZ: I think that it would have depended on the circumstances of how we found it, how exploitable it was, how reproducible it was. I am not sure it would be the case. Every situation I think is specific.