Monday 30 October 2017

F1 | From Switzerland: Ericsson and Leclerc to drive for Sauber in 2018

According to the Swiss daily Blick, Sauber has reportedly chosen its drivers for next season. The pairing will be...

Read the full article here

Thursday 26 October 2017

French Duo Decodes Bitcoin QR From TV Broadcast

In an interview on French television, entrepreneur Roger Ver offered $1,000 worth of bitcoins (about 3 bitcoins at the time) to viewers. The only catch: the QR code was blurred out during most of the broadcast. Fortunately, two enterprising hackers noticed that the code wasn't sufficiently blurred in parts of the video segment. The duo scanned and decoded the readable portions of the QR code, then worked through 2.1 million possible combinations for the missing parts, narrowing them down to two candidates that were valid bitcoin private keys. "The money is the cherry on the cake, but the most exciting part was when we found the private key and the algorithm told us this was the one," says Mr Sassano, adding that he has not yet withdrawn the money.
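The article doesn't describe the pair's actual tooling, but the search it describes (enumerate the unreadable characters, then keep only candidates that are structurally valid keys) can be sketched. A Bitcoin private key printed as a QR code is typically in Wallet Import Format (WIF), which ends in a 4-byte double-SHA256 checksum, so nearly all wrong guesses can be rejected instantly. A minimal, hypothetical sketch (the `search` helper and its template format are illustrative, not the duo's code):

```python
import hashlib
from itertools import product

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_decode(s: str) -> bytes:
    """Decode a Base58 string to bytes (checksum handled by the caller)."""
    n = 0
    for ch in s:
        n = n * 58 + ALPHABET.index(ch)
    return n.to_bytes((n.bit_length() + 7) // 8, "big")

def is_valid_wif(candidate: str) -> bool:
    """Check the 4-byte double-SHA256 checksum that ends every WIF key."""
    try:
        raw = base58_decode(candidate)
    except ValueError:          # character outside the Base58 alphabet
        return False
    payload, checksum = raw[:-4], raw[-4:]
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum

def search(template: str, unknown_positions: list[int]) -> list[str]:
    """Try every way of filling the unreadable characters and keep
    only candidates whose checksum verifies."""
    hits = []
    for combo in product(ALPHABET, repeat=len(unknown_positions)):
        chars = list(template)
        for pos, ch in zip(unknown_positions, combo):
            chars[pos] = ch
        candidate = "".join(chars)
        if is_valid_wif(candidate):
            hits.append(candidate)
    return hits
```

With a 52-character key and a handful of unknown characters, this kind of checksum filter is what lets millions of combinations collapse to a couple of survivors.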

Read the full article here by [H]ardOCP News/Article Feed

Tuesday 24 October 2017

Android-x86 7.1-rc2 Now Supports NVMe SSDs, Better QEMU VirGL

The Android-x86 project, derived from Google's Android Open Source Project code-base, remains officially at Android 6.0, but there is an Android 7.1 "Nougat" build available for testing.

Quietly released earlier this month was a second release candidate based on Android 7.1. The Android-x86 7.1-RC2 release is based on upstream AOSP 7.1-RC2 / Nougat-MR2 along with some extra improvements for this x86-targeted build.

Further improvements in Android-x86 7.1-rc2 include SDCardFS support, SwiftShader as a new software renderer for systems lacking GPU acceleration, better QEMU VirGL support for 3D acceleration in guest VMs, support for NVMe solid-state storage devices, Mesa 17.1.10, improved native bridge support, and an updated Linux 4.9.54 kernel.

More details on this new Android-x86 7 "Nougat" test release are available from the project. Sadly, there is no indication of an Android-x86 8 "Oreo" release coming any time soon, with these x86-focused builds continuing to lag further behind upstream AOSP.

Read the full article here by Phoronix

Monday 23 October 2017

Mercedes handles the competition because it knows how to handle data, too


AUSTIN, Texas—History happened Sunday at the Circuit of the Americas. Formula 1 driver Lewis Hamilton won for the fifth time in six years at Austin, inching him closer to a fourth world championship this year. And on a macro scale, Hamilton’s victory sealed a fourth straight Formula One constructors’ championship for the Silver Arrows team at Mercedes. According to ESPN, that makes Mercedes the first team to win consecutive championships across a major regulation change.

How does a team achieve such sustained dominance—Mercedes has won a staggering 51 of 59 total races between 2014 and 2016—in an era where the sport has witnessed an infusion of more money, more engineering talent, and more of those aforementioned regulations? If you listen to members of the Mercedes-AMG Petronas Motorsport tech team tell it, the answer starts in the team’s network stacks.

“The winning direction today is understanding what kind of problem are you trying to solve. Engineers are all interested in solving problems, but my mantra for a while has been ‘make sure you’re solving the right problem and not just the first one that comes along,’” Geoff Willis, Mercedes-AMG Petronas Motorsport’s former technical director and the team’s newly minted digital, engineering, and transformation director, tells Ars.

“With the top teams, there’s much less trial and error and more predictive understanding. So before we go to a race like here in Austin, we’ve done weeks and weeks of simulations of how to set the car up; drivers have done simulations in it, too. We have a good picture of what to expect, so what we look for when we get here is: ‘Is there anything that alerts us to the car not behaving as we expect?’ If so, then we have a lot of ‘what if’ studies to rely on.”

The ability to recognize and address reliability issues swiftly was certainly the theme when Ars got the opportunity to tour the Mercedes garage ahead of this weekend’s race. That invitation didn’t come from Mercedes; rather, it came from Pure Storage, the California company that partnered with the carmaker early in 2016 to bring flash storage both to the factory and trackside. Network gear may seem like only a small piece of Mercedes’ winning puzzle, but the IT-minded on pit row quickly stressed how important their new storage solution can be.

Pure Storage's teaser video outlining the Mercedes partnership.

Simple logistics

Bottom-line numbers made the switch to Pure Storage flash arrays an easy decision for Mercedes, especially considering that hard disk drives were still in vogue in F1 as recently as a decade ago. So in a sport where garage size can vary week to week (with Austin being on the smaller end: 2.5 Austin garages would fit in the Abu Dhabi one, according to the team), the new devices save a tremendous amount of space. Matt Harris, Mercedes' head of IT, says the team reduced the size of its networking stacks by nearly 70 percent, enough to make up the device cost with only two years of freight savings. “If you keep the weight down and save on cost, you can invest in other performance areas,” says Christian Dixon, a partnership manager on the Mercedes team. “And the more room we can save, the more equipment we can bring.”

More important than the physical logistics, however, the Pure Storage arrays helped Mercedes store and access its whopping amount of data more efficiently. Pure Storage says its technology minimizes the amount of data that needs to be stored in a given location twice as efficiently as its competitors, and (crucially for motorsport) it can transmit data in real time. As you might expect, the Mercedes team has needs more urgent and much larger than the Exchange archives of your average office space.

“Think of the cars as sensors going around the track, picking up info on acceleration, vibrations, pressures, temperatures—we have over 200 sensors on the car,” Dixon says. “We record over 100 times a second with 1,000 channels of data—we’re creating 1.8 billion data points.”

“And we generate 500GB in a race weekend, not just from the car but from everything we do,” Harris adds. “In fact the processing power of the car is the biggest problem—if the processor was faster, we could get data off faster. But now we have to compromise by weighing speed of offloading, speed of turnaround for the car to make decisions, and how much data we want to generate.” (Harris notes the ECU processor, dating back to 2009, is practically the only thing on the car that hasn’t radically changed in recent years.)
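Taken together, those figures roughly check out: assuming a flat 1,000 channels sampled 100 times a second, the quoted 1.8 billion points corresponds to about five hours of logging across a weekend. A back-of-the-envelope check:

```python
channels = 1_000          # "1,000 channels of data"
sample_rate_hz = 100      # "over 100 times a second"
points_per_second = channels * sample_rate_hz   # 100,000 points/s

weekend_points = 1.8e9    # "1.8 billion data points"
logging_seconds = weekend_points / points_per_second
print(logging_seconds / 3600)  # → 5.0 hours of logging over the weekend
```

Five hours is plausible for a car's combined practice, qualifying, and race running, which is consistent with the sensors being the dominant (but not the only) source of the 500GB weekend total.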

Trackside, Harris says 30 or so teammates are dedicated to looking at the data, and updating their systems from relying on legacy servers to the Pure Storage arrays has enabled those datawatchers to act more quickly. “[With the old system], they knew it’d be one to two minutes to open the file, read through the data, and make a decision,” he says. “Opening the wrong bit of data would add time. Now, Pure brings the process down—you can actually make the wrong decision on which piece of data to open without compromising the next run of the car.”

For a real-world example of this new infrastructure supporting the on-track efforts, Harris points to this year’s race in Singapore. Valtteri Bottas, Mercedes’ other world-class driver, kept telling the team he felt a cut in the engine. “But the guys kept saying, ‘No you’re not, you’re not,’” Harris says. “They had to keep getting more refined on the data to see it; it ended up being a 13,000th of a second, and Valtteri could feel it. It was a magnetic field the bridge created.”

The future, where ML meets Mercedes

As you may guess based on their recent history, the Mercedes team is already thinking extensively about where data analysis and storage need to be in the F1 future. To that end, Harris says, the team has started toying with ways to leverage modern machine-learning and artificial intelligence techniques, too. At their factory back in Brackley, England, they rely on Pure Storage Flash Blades (a scalable, parallel storage solution) to store all simulation results and historic data. Mercedes then combines that with another partnership, this one with a company called Tibco that produces software capable of leveraging machine learning for big data analytics.

“We always knew collecting data was a good thing, but we weren’t using it efficiently—it was hard to know what you want to find out and what’s useful to do,” Dixon says.

“So we asked, ‘How can we get rid of the normal data?’” Harris continues. “We still keep that on a filer, but we don’t have to waste our time looking at it if it’s normal. What you want is abnormal data—is it abnormal because we made a change, or is there an issue, or is some kind of trend happening? We wanted to start automating the search for some of that since there’s only so many sets of eyes. These machine-learning, deep-learning techniques we’re beginning to look at—and we are new to it, though learning fast—what we can start doing is immense.”

Willis has been in the sport for decades, much of that time as a technical director across various title-winning teams. He says collecting and understanding data is the area with the biggest gap between successful and unsuccessful F1 teams these days. So just as he helped encourage the team to embrace computer simulations and models once upon a time, today he’s also championing machine-learning adoption within Mercedes.

“I’m not sure whether to say F1 is slow to the party, but we’re just starting to apply this to a lot of areas. We have a handful of machine learning projects in very different areas: race strategy, testing, analysis of software, analysis of component failures,” he says. “Ultimately, it’ll lead to better decision-making. We have lots of data, but you have to do something to categorize it and know where it is before it becomes knowledge. When you then have that knowledge and understand how it all fits together; that’s the real driver for performance in F1.”

Listing image by Mercedes-AMG Petronas Motorsport

Read the full article here by Ars Technica

Facebook teams up with Paypal for sending money via Messenger

Now here's an unusual partnership! Facebook, the heavyweight of social networks, and Paypal, the undisputed leader in secure online payments, have joined forces to let users pay directly through Messenger, the blue giant's chat application (not Intel this time!). With $24 billion exchanged in 2017, a fraction of that would be enough to make this alliance a success. In case of problems, a bot is available to automatically handle most common requests (password resets, disputes, help). Note that a Paypal account (and, of course, a Facebook account) will nevertheless be required to use this feature.


For the moment, the feature is only available in the land of Trump; but no doubt that, barring any scandal, it will reach the old continent as well. It must be said that bringing conversations and payments together within a single application makes life easier for users... and for the statisticians analyzing the causes, dependencies, and other characteristics of those payments; which is reminiscent of a certain Chinese app.


More than ever, it will be worth keeping a close eye on how this service is used (without descending into paranoia) so that settling up with friends doesn't become a gold mine for others.


paybook logo

The CDH offers this 100% authentic logo for the French version!

Read the full article here by Le comptoir du hardware

Sunday 22 October 2017

Gran Turismo Sport review: A brilliant, but very new, direction for the series


Game Details

Developer: Polyphony Digital
Publisher: Sony
Platform: PS4
Release Date: October 17th
Price: $59.99
Links: Official website 

Gran Turismo Sport is a great racing game. What it's not is a simple PS4 port of the last GT game. Almost everything about this latest release is different from every game that has come before it in the series. There are way fewer selectable cars than the competition (and previous GT games). There aren't many tracks. You won't spend hours buying new parts for your car or taking it for an oil change or a car wash. Gran Turismo Sport might not be the world’s most accurate driving simulation, but it’s fun—a lot of fun, particularly with a steering wheel. And refreshingly, it doesn't try to make you open your wallet to unlock anything.

But if racing against other humans online isn't something you care for, GTS is not the game for you. Unlike GT games of old, GTS is all about racing online, and maybe—just maybe—becoming a real racing driver at the end.

You can count on some game franchises to release new titles with metronomic regularity. Not so for the irregular and infrequent installments from Kazunori Yamauchi and his team Polyphony Digital. Deadlines were always a problem with the series, horribly exacerbated by the "nightmare" caused by the PS3 and its Cell processor. But now there's a new GT for the PS4, packed full of super-high definition and virtual reality. By my count, it's the 16th game (including the two-wheeled Tourist Trophy) in a series that dates back two decades with over 70 million copies sold.

Pretty much all of those games followed the same formula. You earn credits in races, then you spend those riches buying and tuning cars to win more races. Gran Turismo Sport bucks that trend in ways that make it quite clear why the developer didn't name the game Gran Turismo 7 (even if Yamauchi thinks that name would have worked).

Late to online, but early to e-sports?

The focus here is very much on the online experience, an emblematic example of an industry trend away from big-budget single-player games. The online focus is present to the extent that the game is quite limited when it can't speak to its servers. This has generated a lot of consternation in certain corners, and anyone looking for an updated version of the excellent GT6 is going to be disappointed. There are only 170-odd cars, and only one of those is a Mazda MX-5 Miata. The solo "Campaign" mode will probably keep you busy for a week, if that. And a lot of favorite tracks are missing—although the 'Ring is still present and correct.

Polyphony Digital was late to the whole idea of online multiplayer racing, only adding it in for 2008's GT5: Prologue. In some ways, GTS can be seen as making up for lost time.

But the series was way ahead of its time in taking the venture seriously enough to boost it from being a mere game to an e-sport. Back in 2008, and together with Nissan, Sony created the GT Academy, which has used the game to find promising talent to turn into actual racing drivers. GT Academy did a lot to legitimize racing games among the professional racing crowd, and e-sports features are now standard issue for any racing title released in 2017. (It's a must-have for actual racing series like Formula E and Formula 1.)

In GTS, that online competition is the core of the game. Polyphony Digital has even partnered with the Fédération Internationale de l'Automobile, or FIA, the body that organizes world motorsports and series like Formula 1. The two got together in 2014 with the idea that an FIA-approved game could earn players a "digital" FIA racing license that may translate to the real world. Racing licenses are required by sanctioning bodies to compete in their series, and they aren't cheap to obtain, which is a major roadblock for many aspiring racers. However, it's still a bit early to see how this "digital" version will pan out; we asked Sony if there is any fresh info but were told to expect news about licenses in the next few weeks.

Officially sanctioned online racing is the game's Sport mode. There are a number of regular Daily Races, which take place at 20-minute intervals throughout the day. Then there are two different FIA series and a Polyphony Digital Championship. Those three all get underway in early November, so our only experience is currently with the daily races, each of which is a 15-minute affair on a relatively short track suited to beginners. (Right now, that's the Brands Hatch Indy circuit, the East Course at Suzuka, and the fictional Northern Isle Speedway oval, although they will change periodically.)

Read the full article here by Ars Technica

Thursday 19 October 2017

F1 on Netflix in 2018: negotiations underway

A partnership between Formula 1 and Netflix could materialize as early as 2018. Motorsport's premier category could...

Read the full article here

AlphaGo Zero Goes From Rank Beginner to Grandmaster in Three Days—Without Any Help

In the 1970 sci-fi thriller Colossus: The Forbin Project, a computer designed to control the United States’ nuclear weapons is switched on, and immediately discovers the existence of a Soviet counterpart.

The two machines, Colossus and Guardian, trade equations, beginning with “1 + 1 = 2.” The math moves faster and faster, advancing through calculus and beyond until suddenly the blurry cascade of symbols stops. The two machines have become one, and it has mankind by the throat.

Hah, you say. Development work takes a lot longer than that.

Maybe not. Today DeepMind, a London-based subsidiary of Google, announced that it has developed a machine that plays the ancient Chinese game of Go much better than its predecessor, AlphaGo, which last year beat Lee Sedol, a world-class player, in Seoul.

The earlier program was trained for months on a massive database of master games and got plenty of pointers—training wheels, as it were—from its human creators. Then it improved further by playing countless games against itself. But the new one, called AlphaGo Zero, received no training wheels; it trained itself all the way from tyro to grandmaster.

In three days.

After a few more days of training, the new machine was pitted against the old one in games held at the standard tournament speed of two hours per player, per game. AlphaGo Zero won by 100 games to zero.

To understand the new system, we must first review last year’s version. It has three parts: a search algorithm, a Monte Carlo simulator, and two deep neural networks.

Search algorithms dictate the moves within computer chess programs. The algorithm begins by listing every possible move, then every possible rejoinder, and so on, generating a tree of analysis. Next, it uses a second algorithm to evaluate the final position on each branch of the tree. Finally, it works its way back to select the move that leads to the best outcome should the opponent also play the best moves. Search is of only limited value in Go, because it’s so hard to evaluate final positions, a problem explained in IEEE Spectrum 10 years ago by Feng-hsiung Hsu, who programmed IBM’s Deep Blue, which defeated then-World Chess Champion Garry Kasparov.
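The procedure just described—enumerate moves, evaluate the leaves, then back values up the tree—is classic minimax search. A toy sketch, with the game-specific parts left as hypothetical callbacks:

```python
def minimax(position, depth, maximizing, moves, apply_move, evaluate):
    """Best achievable score assuming both sides play optimally.
    `moves`, `apply_move`, and `evaluate` are game-specific callbacks:
    legal moves from a position, the position after a move, and a
    numeric score for a leaf (higher is better for the maximizer)."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)          # score the leaf position
    if maximizing:                         # our turn: pick the best reply
        return max(minimax(apply_move(position, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in legal)
    return min(minimax(apply_move(position, m), depth - 1, True,  # opponent
                       moves, apply_move, evaluate) for m in legal)
```

Real chess engines prune this tree aggressively (alpha-beta and friends), but the backed-up-value idea is the same, and it is exactly the part that breaks down in Go, where the leaf evaluation is the hard problem.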

Monte Carlo simulation instead generates great numbers of possible games to get an idea of how often a given move leads to good results. This is what financial planners do when they take the known statistical variance for stocks, bonds, and inflation and use it to generate far more alternative histories than the actual periods for which good records exist. Other Go programmers had already tried this method, with decent results, as described in Spectrum in 2014 by Jonathan Schaeffer, Martin Müller, and Akihiro Kishimoto.
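Monte Carlo evaluation as described can be sketched in a few lines: play many games to completion with random moves and average the results. The game callbacks here are hypothetical stand-ins, not Go:

```python
import random

def playout_value(position, random_move, is_terminal, result):
    """Play one game to the end with random moves and score it."""
    while not is_terminal(position):
        position = random_move(position)
    return result(position)

def monte_carlo_eval(position, n_playouts, random_move, is_terminal, result):
    """Estimate a position's value as the average result of random playouts."""
    total = sum(playout_value(position, random_move, is_terminal, result)
                for _ in range(n_playouts))
    return total / n_playouts
```

The appeal for Go is that no leaf-evaluation function is needed: games are simply played out to the end, where the result is unambiguous, and the noise averages out over many playouts.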

Deep neural networks were applied to Go for the first time by DeepMind’s engineers, led by CEO Demis Hassabis and David Silver. On top of search and Monte Carlo, their original AlphaGo system used two networks, one trained to imitate the play of masters, as exemplified in a huge database of games, and another to evaluate positions. Then the program played millions of times against itself to improve beyond the level of mere human players.

DeepMind calls such self-training reinforcement learning, and AlphaGo Zero relied on this technique alone: no human game data or hand-crafted features here. The machine played itself repeatedly, looking only at the board and at the black and white stones that take their places, move by move, at the intersections of 19 vertical and 19 horizontal lines. And it used one neural network rather than two.

“After 72 hours, we evaluated AlphaGo Zero against the exact version... that defeated Lee Sedol, under the same 2-hour time controls and match conditions that were used in the man-machine match in Seoul,” write Silver, Hassabis and their co-authors today in Nature. “AlphaGo Zero used a single machine with 4 tensor processing units (TPUs), whereas AlphaGo Lee was distributed over many machines and used 48 TPUs. AlphaGo Zero defeated AlphaGo Lee by 100 games to 0.”

Don’t get the idea that this stuff is easy. The authors explain their work with a jungle of symbols reminiscent of the conversation between Guardian and Colossus. Here’s a sample:

“MCTS may be viewed as a self-play algorithm that, given neural network parameters θ and a root position s, computes a vector of search probabilities recommending moves to play, π = α_θ(s), proportional to the exponentiated visit count for each move, π_a ∝ N(s, a)^(1/τ), where τ is a temperature parameter.”
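Unpacked, the formula just says: raise each move's visit count N(s, a) to the power 1/τ and normalize, so a small τ sharpens the policy toward the most-visited move. A minimal sketch:

```python
def search_policy(visit_counts, tau):
    """Turn MCTS visit counts N(s, a) into move probabilities
    pi_a ∝ N(s, a)^(1/tau); small tau concentrates probability
    on the most-visited move, tau = 1 keeps counts proportional."""
    powered = [n ** (1.0 / tau) for n in visit_counts]
    total = sum(powered)
    return [p / total for p in powered]
```

With τ = 1 the policy mirrors the visit counts (useful for exploration early in training); as τ shrinks it approaches simply playing the most-visited move.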

Chart: AlphaGo’s power consumption has dropped with each generation. (Image: DeepMind)

I looked it up: “Temperature” is a concept derived from statistical mechanics.

To a Go player, the result is a mixture of the familiar and the strange. In a commentary in Nature, Andy Okun and Andrew Jackson of the American Go Association write: “At each stage of the game, it seems to gain a bit here and lose a bit there, but somehow it ends up slightly ahead, as if by magic.”

And, the commentators add, the machine’s self-taught methods in the early and later parts of the game confirm the lore that grandmasters have accumulated over centuries of play. “But some of its middle-game judgements are truly mysterious.”

The DeepMind researchers discovered another bit of weirdness. When they had a neural network train itself to predict the moves of expert players, it did very well, though it took a bit longer to reach the standard of a system trained with human supervision. However, the self-trained network played better overall, suggesting “that AlphaGo Zero may be learning a strategy that is qualitatively different to human play.”

Different and better. Toward the end of Colossus: The Forbin Project, the computer says, “What I am began in Man's mind, but I have progressed further than Man.”

But before we welcome our new overlords, a splash of cold water may be in order. And there is no better supplier of such coolant for AI hype than Rodney Brooks, who recently wrote for Spectrum on the limitations of self-driving cars.

In his keynote at the IEEE TechEthics Conference, held on Friday in Washington, D.C., Brooks said he’d asked the creators of the original AlphaGo how it would have fared against Lee Sedol if, at the last minute, the board had been enlarged by 10 lines, to 29 x 29. They told him the machine couldn’t have managed even if the board had been shrunk by a single line, to 18 x 18. That’s how specialized these deep neural networks are.

Brooks showed the audience a photo that Google’s AI system had labeled as a scene of people playing frisbee. “If a person had done this,” Brooks said, “we’d assume that he’d know a lot of other things about frisbees—that he could tell us whether a three-month-old can play frisbee, or whether you can eat a frisbee. Google’s AI can’t!”

AlphaGo Zero can’t tell us that Go is harder than checkers, that it involves occupying territory rather than checkmating the opponent’s king, or that it is a game. It can’t tell us anything at all. 

But no human can stand against it.

Read the full article here by Computing: IEEE Spectrum

Wednesday 18 October 2017

Research Risks

The 1919 Great Boston Molasses Flood remained the deadliest confectionery containment accident until the Canadian Space Agency's 2031 orbital maple syrup delivery disaster.

Read the full article here

Intel proposes a mathematical formula for self-driving car safety

As autonomous vehicles become a part of the landscape, there are more questions than ever about their safety, and about how to determine responsibility when they get in an accident. With so many companies (Alphabet, Uber, GM, Tesla and Ford, just to name a few) working on different technology, there's also a question of how to establish standards on any level. Now Amnon Shashua, CEO of Mobileye (recently acquired by Intel), is proposing a model called Responsibility Sensitive Safety to "prove" the safety of autonomous vehicles.

Safe Distance Formula

In practice, the AV needs to know two things:

  • Safe State: This is a state where there is no risk that the AV will cause an accident, even if other vehicles take unpredictable or reckless actions.
  • Default Emergency Policy: This is a concept that defines the most aggressive evasive action that an AV can take to maintain or return to a Safe State.

We coin the term Cautious Command to represent the complete set of commands that maintains a Safe State. RSS sets a hard rule that the AV will never make a command outside of the set of Cautious Commands. This ensures that the planning module itself will never cause an accident.

The paper (PDF) backing this model up tries to establish some baseline equations (the setup pictured above is just one of the situations) for self-driving cars to make sure they will behave safely. That includes in situations where it's possible for pedestrians to appear from behind another car, or, for example, making sure they're not following another car too closely. The point is to make sure that autonomous vehicles don't cause collisions, even if it may not prevent all collisions, and be able to establish blame in a crash that happens without a human driver.
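One of those baseline equations is the paper's minimum safe longitudinal following distance: the rear car is assumed to accelerate at up to a_max for the response time ρ and then brake only gently (b_min), while the front car may brake as hard as possible (b_max) at any moment. A sketch of that formula, per the RSS paper's longitudinal-distance lemma (the parameter values in the example are illustrative, not from the paper):

```python
def rss_safe_distance(v_rear, v_front, rho, a_max_accel, b_min_brake, b_max_brake):
    """Minimum safe longitudinal gap per the RSS model: during the
    response time rho the rear car may accelerate at a_max_accel,
    after which it brakes at only b_min_brake, while the front car
    may brake at up to b_max_brake at any moment. Speeds in m/s,
    accelerations in m/s^2, times in s; result in metres."""
    v_after_response = v_rear + rho * a_max_accel
    d = (v_rear * rho                                # travel during response
         + 0.5 * a_max_accel * rho ** 2              # extra travel if accelerating
         + v_after_response ** 2 / (2 * b_min_brake) # rear car's gentle stop
         - v_front ** 2 / (2 * b_max_brake))         # front car's hardest stop
    return max(d, 0.0)

# Illustrative numbers: both cars at 30 m/s, 1 s response time.
# rss_safe_distance(30, 30, 1.0, 3.0, 4.0, 8.0) → 111.375 m
```

If the rear car never lets the gap fall below this value, then under the model's assumptions it cannot be the cause of a rear-end collision, which is exactly the "never cause an accident" guarantee the excerpt describes.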

Source: Intel, Arxiv (PDF)

Read the full article here by Engadget

Tuesday 17 October 2017

Alibaba and Red Hat Partner to Take RHEL-as-a-Service to China's Largest Cloud


Oct 17, 2017, 11:00, by Christine Hall


Although Red Hat in the cloud is already available in China, the new arrangement is important because it makes the company's software portfolio available on the largest cloud in the largest of the emerging markets. This benefits existing customers with expansion plans that include what is now the world's second largest economy. It also promises to generate revenue from inside the country.


Read the full article here by Linux Today

Pixel 2 Has Google's First In-House Chipset: Pixel Visual Core

The Pixel 2 and Pixel 2 XL had a secret Image Processing Unit (IPU) that none of us knew about until now: Google has just announced that both phones contain their first custom-designed System on Chip (SoC), called Pixel Visual Core. In the coming weeks and months, Google will enable the Pixel Visual Core to help bring the Pixel camera’s HDR+ magic to third party camera apps.

The goal here is to bring the Pixel 2’s HDR+ smarts to apps outside of the Google Camera app. The SoC will do so by taking advantage of its eight Google-designed custom cores, which can deliver "over 3 trillion operations per second on a mobile power budget." By using the Pixel Visual Core, Google says that HDR+ can run 5x faster and at less than 1/10th the energy than if it were running through the application processor (like the Snapdragon 835 in these phones).


Read the full article here by [H]ardOCP News/Article Feed

Gif: falling...


Read the full article here by Likecool

Keyboard latency

If you look at “gaming” keyboards, a lot of them sell for $100 or more on the promise that they’re fast. Ad copy that you’ll see includes:

  • a custom designed keycap that has been made shorter to reduce the time it takes for your actions to register
  • 8x FASTER - Polling Rate of 1000Hz: Response time 0.1 milliseconds
  • Wield the ultimate performance advantage over your opponents with light operation 45g key switches and an actuation 40% faster than standard Cherry MX Red switches
  • World’s Fastest Ultra Polling 1000Hz
  • World’s Fastest Gaming Keyboard, 1000Hz Polling Rate, 0.001 Second Response Time
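The polling-rate claims above are easy to put in perspective: the advertised "8x" is just the ratio between a 1000Hz poll interval and the 125Hz polling typical of standard USB keyboards, i.e. a worst-case difference of 7ms:

```python
standard_polling_hz = 125    # typical USB low-speed keyboard polling
gaming_polling_hz = 1000     # "Ultra Polling 1000Hz"

standard_interval_ms = 1000 / standard_polling_hz  # 8 ms between polls
gaming_interval_ms = 1000 / gaming_polling_hz      # 1 ms between polls

speedup = standard_interval_ms / gaming_interval_ms
print(speedup)  # → 8.0, the advertised "8x"
```

Note that this bounds only the polling leg of the trip; a 7ms best-case saving is small next to the keypress-to-USB latencies measured below, which is exactly why polling rate alone is a poor proxy for keyboard speed.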

Despite all of these claims, I can only find one person who’s publicly benchmarked keyboard latency and they only tested two keyboards. In general, my belief is that if someone makes performance claims without benchmarks, the claims probably aren’t true, just like how code that isn’t tested (or otherwise verified) should be assumed broken.

The situation with gaming keyboards reminds me a lot of talking to car salesmen:

Salesman: this car is super safe! It has 12 airbags!
Me: that’s nice, but how does it fare in crash tests?
Salesman: 12 airbags!

Sure, gaming keyboards have 1000Hz polling, but so what?

Two obvious questions are:

  1. Does keyboard latency matter?
  2. Are gaming keyboards actually quicker than other keyboards?

Does keyboard latency matter?

A year ago, if you’d asked me if I was going to build a custom setup to measure keyboard latency, I would have said that’s silly, and yet here I am, measuring keyboard latency with a logic analyzer.

It all started because I had this feeling that some old computers feel much more responsive than modern machines. For example, an iMac G4 running macOS 9 or an Apple 2 both feel quicker than my 4.2 GHz Kaby Lake system. I never trust feelings like this because there are decades of research showing that users often have feelings that are the literal opposite of reality, so I got a high-speed camera and started measuring actual keypress-to-screen-update latency as well as mouse-move-to-screen-update latency. It turns out the machines that feel quick are actually quick, much quicker than my modern computer – computers from the 70s and 80s commonly have keypress-to-screen-update latencies in the 30ms to 50ms range out of the box, whereas modern computers are often in the 100ms to 200ms range when you press a key in a terminal. It’s possible to get down to the 50ms range in well optimized games with a fancy gaming setup, and there’s one particularly impressive consumer device that can easily get below that. But in general, the default experience is much slower. Modern computers have much better throughput, but their latency isn’t so great.

Anyway, at the time I did these measurements, my 4.2 GHz Kaby Lake had the fastest single-threaded performance of any machine you could buy but had worse latency than a quick machine from the 70s (roughly 6x worse than an Apple 2), which seems a bit curious. To figure out where the latency comes from, I started measuring keyboard latency because that’s the first part of the pipeline. My plan was to look at the end-to-end pipeline and start at the beginning, ruling out keyboard latency as a real source of latency. But it turns out keyboard latency is significant! I was surprised to find that the median keyboard I tested has more latency than the entire end-to-end pipeline of the Apple 2. If this doesn’t immediately strike you as absurd, consider that an Apple 2 has 3500 transistors running at 1MHz and an Atmel employee estimates that the core used in a number of high-end keyboards today has 80k transistors running at 16MHz. That’s 20x the transistors running at 16x the clock speed – keyboards are often more powerful than entire computers from the 70s and 80s! And yet, the median keyboard today adds as much latency as the entire end-to-end pipeline of a fast machine from the 70s.

Let’s look at the measured keypress-to-USB latency on some keyboards:

keyboard               latency (ms)   connection   gaming
apple magic (usb)      15             USB FS
MS natural 4000        20             USB
das 3                  25             USB
logitech k120          30             USB
filco majestouch       30             USB
dell OEM               30             USB
powerspec OEM          30             USB
razer ornata chroma    35             USB FS       Yes
olkb planck rev 4      40             USB FS
ergodox                40             USB FS
MS comfort 5000        40             wireless
easterntimes i500      50             USB FS       Yes
kinesis advantage      50             USB FS
genius luxemate i200   55             USB
topre type heaven      55             USB FS
logitech k360          60             "unifying"

The latency measurements are the time from when the key starts moving to the time when the USB packet associated with the key makes it out onto the USB bus. Numbers are rounded to the nearest 5 ms in order to avoid giving a false sense of precision. The easterntimes i500 is also sold as the tomoko MMC023.

The connection column indicates the connection used. USB FS stands for the USB full-speed protocol, which allows up to 1000 Hz polling, a feature commonly advertised by high-end keyboards; USB is the USB low-speed protocol, which most keyboards use. The 'gaming' column indicates whether the keyboard is branded as a gaming keyboard. 'wireless' indicates some kind of keyboard-specific dongle, and 'unifying' is Logitech's wireless device standard.

We can see that, even with the limited set of keyboards tested, there can be as much as a 45ms difference in latency between keyboards. Moreover, a modern computer with one of the slower keyboards attached can’t possibly be as responsive as a quick machine from the 70s or 80s because the keyboard alone is slower than the entire response pipeline of some older computers.

That establishes that modern keyboards contribute to the latency bloat we've seen over the past forty years. The other half of the question is: does the latency added by a modern keyboard actually make a difference to users? From the table, we can see that, among the keyboards tested, there can be up to a 40ms difference in average latency. Is 40ms of latency noticeable? Let's look at the empirical research on how much latency users notice.

There's a fair amount of empirical evidence on this, and we can see that, for very simple tasks, people can perceive latencies down to 2ms or less. Moreover, increased latency is not only noticeable to users, it causes users to execute simple tasks less accurately. If you want a visual demonstration of what latency looks like and you don't have a super-fast old computer lying around, check out this MSR demo on touchscreen latency.

Are gaming keyboards faster than other keyboards?

I’d really like to test more keyboards before making a strong claim, but from the preliminary tests here, it appears that gaming keyboards aren’t generally faster than non-gaming keyboards.

Gaming keyboards often claim to have features that reduce latency, like connecting over USB FS and using 1000Hz polling. The USB low speed spec states that the minimum time between packets is 10ms, or 100 Hz. However, it’s common to see USB devices round this down to the nearest power of two and run at 8ms, or 125Hz. With 8ms polling, the average latency added from having to wait until the next polling interval is 4ms. With 1ms polling, the average latency from USB polling is 0.5ms, giving us a 3.5ms delta. While that might be a significant contribution to latency for a quick keyboard like the Apple magic keyboard, it’s clear that other factors dominate keyboard latency for most keyboards and that the gaming keyboards tested here are so slow that shaving off 3.5ms won’t save them.
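The polling arithmetic above can be sketched as a quick back-of-the-envelope calculation (the polling rates come from the text; the function name is just for illustration):

```python
# Average latency added by USB polling: a keypress lands at a uniformly
# random point within the polling interval, so on average it waits for
# half of the interval before the host asks for the report.
def avg_polling_delay_ms(poll_hz: float) -> float:
    interval_ms = 1000.0 / poll_hz
    return interval_ms / 2.0

low_speed = avg_polling_delay_ms(125)    # common 8 ms polling
full_speed = avg_polling_delay_ms(1000)  # 1 ms "gaming" polling

print(f"125 Hz polling adds {low_speed:.1f} ms on average")   # 4.0 ms
print(f"1000 Hz polling adds {full_speed:.1f} ms on average") # 0.5 ms
print(f"delta: {low_speed - full_speed:.1f} ms")              # 3.5 ms
```

The 3.5 ms delta is why 1000 Hz polling can only matter for keyboards that are already fast everywhere else.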


Most keyboards add enough latency to make the user experience noticeably worse, and keyboards that advertise speed aren’t necessarily faster. The two gaming keyboards we measured weren’t faster than non-gaming keyboards, and the fastest keyboard measured was a minimalist keyboard from Apple that’s marketed more on design than speed.

Previously, we've seen that terminals can add significant latency, up to 100ms in mildly pessimistic conditions. In a future post, we'll look at the entire end-to-end pipeline to see other places latency has crept in, and we'll also look at how some modern devices keep latency down.

Appendix: where is the latency coming from?

A major source of latency is key travel time. It's not a coincidence that the quickest keyboard measured also has the shortest key travel distance, by a large margin. The video setup I'm using to measure end-to-end latency is a 240 fps camera, which means that frames are roughly 4ms apart. When videoing "normal" keypresses and typing, it takes 4-8 frames for a key to become fully depressed. Most switches will start firing before the key is fully depressed, but the key travel time is still significant and can easily add 10ms of delay (or more, depending on the switch mechanism). Contrast this with the Apple keyboard measured, where the key travel is so short that it can't be captured with a 240 fps camera, indicating that the key travel time is < 4ms.

Note that, unlike the other measurement I was able to find online, this measurement was from the start of the keypress instead of from switch activation. This is because, as a human, you don't activate the switch, you press the key. A measurement that starts from switch activation time misses this large component of latency. If, for example, you're playing a game and you switch from moving forward to moving backward when you see something happen, you have to pay the cost of the key movement, which is different for different keyboards. A common response to this is that "real" gamers will preload keys so that they don't have to pay the key travel cost, but if you go around with a high-speed camera and look at how people actually use their keyboards, the fraction of keypresses that are significantly preloaded is basically zero, even among gamers. It's possible you'd see something different if you looked at high-level competitive gamers, but even then, just for example, people who use a standard wasd or esdf layout will typically not preload a key when going from back to forward. Also, the idea that it's fine for keys to have a bunch of useless travel because you can pre-depress the key before really pressing it is absurd. That's like saying latency on modern computers is fine because some people build gaming boxes that, when running unusually well-optimized software, get 50ms response time. Normal, non-hardcore-gaming users simply aren't going to do this. Since they're the vast majority of the market, even if all "serious" gamers did this, it would still be a rounding error.

The other large sources of latency are scanning the keyboard matrix and debouncing. Neither of these delays is inherent. Keyboards use a matrix that has to be scanned, instead of having a wire per key, because it saves a few bucks, and most keyboards scan the matrix at such a slow rate that it induces human-noticeable delays, again because that saves a few bucks. But a manufacturer willing to spend a bit more on manufacturing a keyboard could push the delay from scanning far below the threshold of human perception. See below for debouncing delay.
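To see how scan rate turns into latency, here's a toy model (the scan rates below are hypothetical illustrations, not measurements from any keyboard in the table): a keypress that lands just after its row was sampled waits until the next full scan cycle, so the average added delay is half the scan period and the worst case is a whole period.

```python
# Toy model of matrix-scan latency. The controller sweeps every row once
# per scan period; a key pressed at a random instant waits, on average,
# half a period before its row is sampled again (worst case: a full period).
def scan_latency_ms(scan_hz: float) -> tuple[float, float]:
    period_ms = 1000.0 / scan_hz
    return period_ms / 2.0, period_ms  # (average, worst case)

for hz in (100, 1000):  # hypothetical slow vs. fast scan rates
    avg, worst = scan_latency_ms(hz)
    print(f"{hz:>5} Hz scan: avg {avg:.1f} ms, worst {worst:.1f} ms")
```

A 100 Hz scan adds 5 ms on average all by itself; a controller scanning at 1 kHz or more pushes this term well below anything a human could notice.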

Appendix: counter-arguments to common arguments that latency doesn’t matter

Before writing this up, I read what I could find about latency and it was hard to find non-specialist articles or comment sections that didn’t have at least one of the arguments listed below:

Computers and devices are fast

The most common response to questions about latency is that input latency is basically zero, or so close to zero that it’s a rounding error. For example, two of the top comments on this slashdot post asking about keyboard latency are that keyboards are so fast that keyboard speed doesn’t matter. One person even says

There is not a single modern keyboard that has 50ms latency. You (humans) have that sort of latency.

As far as response times, all you need to do is increase the poll time on the USB stack

As we've seen, some devices do have latencies in the 50ms range. This quote, as well as other comments in the thread, illustrates another common fallacy: that input devices are limited by the speed of USB polling. While that's technically possible, most devices are nowhere near fast enough to be limited by USB polling latency.

Unfortunately, most online explanations of input latency assume that the USB bus is the limiting factor.

Humans can’t notice 100ms or 200ms latency

Here's a "cognitive neuroscientist who studies visual perception and cognition" who refers to the fact that human reaction time is roughly 200ms, and then throws in a bunch more scientific mumbo jumbo to say that no one could really notice latencies below 100ms. This is a little unusual in that the commenter claims some kind of special authority and uses a lot of terminology, but it's common to hear people claim that you can't notice 50ms or 100ms of latency because human reaction time is 200ms. This doesn't actually make sense, because these are independent quantities. This line of argument is like saying that you wouldn't notice a flight being delayed by an hour because the flight takes six hours.

Another problem with this line of reasoning is that the full pipeline from keypress to screen update is quite long, and if you say it's always fine to add 10ms here and 10ms there, you end up with a much larger amount of bloat through the entire pipeline. That's how we got where we are today, where you can buy a system with the CPU that gives you the fastest single-threaded performance money can buy and still get 6x the latency of a machine from the 70s.

It doesn’t matter because the game loop runs at 60 Hz

This is fundamentally the same fallacy as above. If you have a delay that's half the duration of a clock period, there's a 50% chance the delay will push the event into the next processing step. That's better than a 100% chance, but it's not clear to me why people think you'd need a delay as long as the clock period for the delay to matter. And for reference, the 45ms delta between the slowest and fastest keyboards measured here corresponds to 2.7 frames at 60fps.
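The argument can be made concrete: assuming the input arrives at a uniformly random point within a frame of duration T, a delay of d ms pushes the event past the next frame boundary with probability min(d/T, 1), and the expected number of extra frames is d/T. A quick sketch:

```python
# Chance that a delay pushes an input past the next frame boundary,
# assuming the input arrives at a uniformly random point in the frame.
def p_miss_frame(delay_ms: float, frame_ms: float) -> float:
    return min(delay_ms / frame_ms, 1.0)

frame_60hz = 1000.0 / 60.0  # ~16.7 ms per frame

# A delay of half a frame still pushes the event into the next frame
# half the time -- it doesn't need to be a full frame long to matter.
print(p_miss_frame(frame_60hz / 2, frame_60hz))  # 0.5

# The 45 ms keyboard-to-keyboard delta from the table, in 60 fps frames:
print(45 / frame_60hz)  # 2.7
```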

Keyboards can't possibly respond more quickly than 5ms/10ms/20ms due to debouncing

Even without going through contortions to optimize the switch mechanism, if you're willing to put hysteresis into the system, there's no reason the keyboard can't assume a keypress (or release) is happening the moment it sees an edge. This is commonly done in other types of systems, and AFAICT there's no reason keyboards couldn't do the same thing (and perhaps some do). The debounce time might limit the repeat rate of the key, but there's no inherent reason it has to affect latency. And if we're looking at the repeat rate, imagine we have a 5ms limit on the rate of change of the key state due to introducing hysteresis. That gives us one full keypress cycle (press and release) every 10ms, or 100 keypresses per second per key, which is well beyond the capacity of any human. You might argue that this introduces a kind of imprecision, which might matter in some applications (music, rhythm games), but that's limited by the switch mechanism; using a debouncing mechanism with hysteresis doesn't make us any worse off than we were before.
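A sketch of the edge-triggered debouncing described above (the 5 ms hold-off and the sample data are illustrative): report a state change on the first edge seen, then ignore further edges for a hold-off window. The press itself incurs no debounce latency; only the toggle rate is limited.

```python
# Edge-triggered debounce with hysteresis: act on the first edge
# immediately, then ignore bounces for HOLDOFF_MS. This adds no latency
# to the press itself; it only limits how fast the state can toggle.
HOLDOFF_MS = 5

def debounce(samples):
    """samples: iterable of (time_ms, raw_level). Yields (time_ms, state)."""
    state = 0
    ignore_until = -1.0
    for t, raw in samples:
        if t >= ignore_until and raw != state:
            state = raw
            ignore_until = t + HOLDOFF_MS  # hysteresis window
            yield t, state

# A press at t=0 that bounces at t=1 and t=2, then a release at t=10:
events = list(debounce([(0, 1), (1, 0), (2, 1), (10, 0)]))
print(events)  # [(0, 1), (10, 0)] -- one clean edge per transition, zero added delay
```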

Apologies for not explaining terminology here, but I think that anyone making this objection should understand the explanation :-).

Appendix: experimental setup

The USB measurement setup was a USB cable cut open to expose the wires, connected to a logic analyzer. The exact model of logic analyzer doesn't really matter, but if you're curious about the details, this set of experiments used a Saleae Logic Pro.

The start-of-input was measured by pressing two keys at once – one key on the keyboard and a button that was also connected to the logic analyzer. This introduces some jitter as the two buttons won’t be pressed at exactly the same time. To calibrate the setup, we used two identical buttons connected to the logic analyzer. The median jitter was < 1ms and the 90%-ile jitter was roughly 5ms. This is enough that tail latency measurements for quick keyboards aren’t really possible with this setup, but average latency measurements like the ones done here seem like they should be ok. The input jitter could probably be reduced to a negligible level by building a device to both trigger the logic analyzer and press a key on the keyboard under test at the same time. Average latency measurements would also get better with such a setup (because it would be easier to run a large number of measurements).

If you want to know the exact setup, an E-Switch LL1105AF065Q switch was used, with power and ground supplied by an Arduino board. There's no particular reason for this setup; in fact, it's a bit absurd to use an entire Arduino just to provide power, but this was built from spare parts that happened to be lying around RC's lab, with the exception of the switches. There weren't two identical copies of any switch on hand, so we bought a few switches in order to do calibration measurements with two identical switches. The exact type of switch isn't important here; any low-resistance switch would do.

Tests were done by pressing the z key, looking for byte 29 on the USB bus, and marking the end of the first packet containing the appropriate information. But, as above, any key would do.
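For reference, byte 29 is the HID usage ID for 'z'. A minimal parse of a standard 8-byte USB boot-protocol keyboard report shows where that byte appears; this is a sketch of the decoding step, not the actual analysis code used:

```python
# Standard HID boot-protocol keyboard report: byte 0 = modifier bits,
# byte 1 = reserved, bytes 2-7 = up to six concurrently pressed keys
# (HID usage IDs; 'z' is usage ID 29, i.e. 0x1d).
Z_USAGE_ID = 0x1D

def keys_in_report(report: bytes) -> list:
    assert len(report) == 8, "boot-protocol reports are 8 bytes"
    return [b for b in report[2:] if b != 0]  # 0 means "no key in this slot"

# A report with only 'z' held down:
report = bytes([0x00, 0x00, 0x1D, 0x00, 0x00, 0x00, 0x00, 0x00])
print(Z_USAGE_ID in keys_in_report(report))  # True
```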

I would love to test more keyboards! If you’re in the NYC area and have a keyboard that I can borrow for a test, please contact me! I’m also looking for more phones and machines to measure and would love to run a quick benchmark with a high-speed camera if you have a machine or device that’s not on this list! If you’re not in the area and want to donate a keyboard for me to test, feel free to mail the keyboard to

Dan Luu
Recurse Center
455 Broadway, 2nd Floor
New York, NY 10013

The setup I currently have can only get measurements from devices that present themselves as standard USB keyboards, but I might try to decode keyboards that use other protocols over USB, and I might also try to build a setup that can handle PS/2 sometime.

This project was done with help from Wesley Aptekar-Cassels, Leah Hanson, and Kate Murphy. BTW, Wesley is looking for work. In addition to knowing “normal” programming stuff, he’s also familiar with robotics, controls, electronics, and general “low-level” programming.

Thanks to RC, Ahmad Jarara, Raph Levien, Peter Bhat Harkins, Brennan Chesley, Dan Bentley, Kate Murphy, Christian Ternus, and Dan Puttick, for letting us use their keyboards for testing.

Thanks to Leah Hanson, Mark Feeney, and Zach Allaun for comments/corrections/discussion on this post.

Read the full article here by Dan Luu

Ferrero: Not Just Chocolate; Kinder Ice Cream Arrives in 2018

Ferrero has announced its entry into the ice cream business, in partnership with Unilever. In 2018 the Kinder Ice Cream line will launch in France, Germany, Austria, and Switzerland. The Alba-based giant will supply the brand to the Anglo-Dutch multinational, which will handle production and distribution of the new product line, in...

Read the full article here by

Monday 16 October 2017

Optical Illusion Floor Tiles 'To Prevent People From Running Down Hallway'

This is the optical illusion ceramic tile floor installed by UK-based tile flooring manufacturer Casa Ceramica in the hallway to their showroom. They claim they had a problem with people running down the hallway and wanted to do something to prevent it. Cool idea. The floor is made up of 400 individual tiles, and the optical illusion only works from the perspective seen above...(Read...)

Read the full article here by Likecool

Microsoft to Roll-out Anti-cheating Tech with Windows 10 Fall Creators Update

With its upcoming "Fall Creators Update" for Windows 10, Microsoft is preparing to roll out its own game anti-cheating platform, under two new technologies, TruePlay and Game Monitor. TruePlay provides a "new set of tools to combat cheating within their PC games," according to Microsoft. This is similar to VAC (Valve Anti-Cheating). From the looks of it, the TruePlay API is limited to games built for the UWP (Universal Windows Platform), such as recent additions to the Forza franchise. Game Monitor is another side of this coin. When enabled, the operating system shares system information with games to weed out cheating tools such as aimbots. Enabling it could soon become a requirement of certain online multiplayer games.

Games with TruePlay run in a "protected" (read: sandboxed) process, which mitigates a class of common cheating tools, as the game's real PID is never exposed to other processes. A separate Windows process will be on constant lookout for behaviors and manipulations that are common in cheating scenarios. Data (read: telemetry) of this process will be shared with game developers after determining that cheating could have occurred. The "Fall Creators Update" for Windows 10 is likely to be released before December.

Sources: MSDN, Forums

Read the full article here by techPowerUp!

Projects You Can Help With For Advancing Open-Source NVIDIA "Nouveau" Graphics

Longtime Nouveau contributor Karol Herbst has been working on an updated list of project ideas for new contributors or those that may be wanting to participate in an Endless Vacation of Code / Google Summer of Code.

They haven't updated their project ideas list in a while, so Karol took it upon himself to think of new Nouveau project ideas and discuss them on the Nouveau mailing list. Among the areas where Nouveau could use extra help:

- Better handling of out-of-memory situations with vRAM like more swapping to system memory or allow for memory usage reporting to user-space.

- Tracking down more performance bottlenecks within Nouveau.

- Constructing a Nouveau Vulkan driver.

- Finishing up Nouveau's limited OpenCL compute support right now so it can pass the OpenCL Conformance Test Suite.

- Experimental NIR support for making use of this intermediate representation used by Intel's driver, beginning to be used in RadeonSI, and also by Freedreno/VC4.

- More compiler optimizations.

- The big messy task of working on re-clocking improvements.

More details via this Nouveau thread. If you are looking to get involved with open-source NVIDIA GPU driver development, hop on their mailing list or IRC channel; details are on their Wiki site.

Read the full article here by Phoronix

Hands-on & More With Huawei's Mate 10 and Mate 10 Pro: Kirin 970 Meets Artificial Intelligence

This morning Huawei is taking the wraps off of their latest generation flagship smartphone, the Mate 10 series. Powered by subsidiary HiSilicon's Kirin 970 SoC, and with a design that is not simply a carbon copy of the earlier Mate phones but is still very much a traditional smartphone, Huawei's latest flagships are a mix of old and new: tried and true paired with the cutting edge. It's an interesting balancing act, and one that, if consumers agree, will further bolster Huawei's success in the international smartphone market while at the same time pushing a nascent technology to the forefront of the mobile industry.

That technology is, of course, artificial intelligence, which has become the buzzword for the latter half of this decade in the world of technology. Long a lofty goal of computer science – if not perhaps its holy grail – recent advancements in the field have opened the door to new methods and new applications. And while this era of neural networking-driven AI is not by any means producing devices that actually think like a human, even this weak form of AI is, in the right use cases, far more capable than anything that has come before it.

Of course, the usefulness of having neural networking hardware is only as good as the applications that run on it, and in these still-early days of the field, the industry as a whole is trying to figure out what those applications should be. Having a self-driving car or a smart NPC in a video game makes sense, but applying it to a smartphone is confusing at first. Huawei announced that its new Kirin 970 chipset has dedicated silicon for running artificial intelligence networks, and the Mate 10 series is the first device line running this chip. Today, the company announced the smartphones and unveiled their features.

The Mate 10, Mate 10 Pro, and Mate 10 Porsche Design

The devices themselves are part of Huawei’s yearly cadence with the Mate series. Every year at around this time we see a new smartphone SoC and the first two devices that power it: the Mate and the Mate Pro. Both the hardware and the design are meant to be iterative – Huawei’s HiSilicon division takes the ‘best’ IP available from ARM to develop the processor, and the design team takes cues from the industry as to what will be the next statement in aesthetics.

One of the big trends for 2017 (and moving into 2018) is full-screen display technology. In previous years, manufacturers have often quoted ‘screen-to-body’ ratios in order to show how much of the face of the device is taken up by screen, but it is this year that has started to push the boundaries on this aspect. Arguably devices such as Xiaomi’s MI MIX range were instrumental in pushing this, but the upside is more screen for everyone or the same sized screen in smaller devices. Huawei is pushing the screen with its ‘FullView Display’ (the marketing name for it).

The Mate 10 comes with a 5.9-inch FullView display, using a glass front for the 2560x1440 LCD display, coming in at 499 pixels per inch. Huawei is quoting panels capable of a 1500:1 contrast ratio, while the color space is listed at a less-than-useful metric of 96% NTSC.

The Mate 10 Pro (and Porsche Design) are slightly bigger with their 6.0-inch displays, although this time it comes with an OLED screen at 2160x1080 resolution. This is a lower pixel density (402 ppi) and resolution compared to the regular Mate 10, but is rated at 112% NTSC and 7000:1 contrast. The smaller resolution and use of OLED might also assist in battery life as well, and overall the unit is lighter than the Mate 10.

Neither device goes to the extreme with the display completely covering the front, as it requires certain methods of moving the internals such as the camera (on the bottom on the MI MIX, on the notch on the iPhone X) as well as how to implement fingerprint technology. One of the biggest design deviations for this generation of Mate devices is that the Mate 10 regular edition now has the fingerprint sensor on the front of the phone, rather than the rear. In my eyes this is a pretty big jump, given that the Mate S, the Mate 8 and the Mate 9 regular editions all had fingerprint sensors on the rear. The Mate 10 Pro, by contrast, does keep the sensor on the rear.

This pre-production unit hasn't updated the logo

There is no difference between each of the devices for the SoC inside, with each device getting the full-fat Kirin 970. This means four ARM Cortex A73 cores at 2.36 GHz and four ARM Cortex A53 cores at 1.8 GHz. These are paired with Mali G72 MP12 graphics (at an unstated frequency), the i7 sensor processor, and Huawei’s new Neural Processing Unit, or NPU (more on this later).  All of the units will use Huawei’s latest Category 18 integrated LTE modem, capable of 1.2 Gbps download using 4x4 MIMO on 3-carrier aggregation with 256-QAM. Each device supports dual-SIM LTE concurrently (along with dual-SIM VoLTE), although this limits downloads to Category 16. Uploads are at Category 13.

Only one option for memory and storage is available with the Mate 10, with Huawei settling on 4GB of LPDDR4X DRAM and 64GB of NAND for storage, with microSD card support further augmenting that, though by taking one of the SIM slots. For some reason it says limited to 256GB, though I will ask about the new 400GB microSD cards.

The Mate 10 Pro will be available in 4GB/64GB and 6GB/128GB versions, although the latter will be dependent on region – we are told around 20 countries are on the initial list. The Mate 10 Porsche Design model will be only available in a 6GB/256GB configuration, similar to last year.

All the devices come with the typical dual-band 802.11ac Wi-Fi support, extending to BT4.2, and will include NFC. All three devices use USB Type-C, but only the base model has a headphone jack. Despite the Mate 10 Pro/PD being physically bigger than the standard Mate 10, all three devices use a 4000 mAh battery which is TUV certified for SuperCharge. That in itself is fairly large for a modern flagship, which is perhaps a benefit of only a few smartphone companies now competing in the ‘under 7mm’ metric for thickness. The Huawei devices come in at 8.2mm and 7.9mm for that.

The cameras on all the devices are identical as well, with Huawei further leveraging its Leica brand cooperation. The front camera is an 8MP f/2.0 unit, while the rear camera does something a little bit different this time around. The dual camera is vertical, like the Mate 9, but without the extra protective shroud around the lenses. The cameras are similar 12MP RGB and 20MP monochrome, as found on last year's flagships, although this time they are both f/1.6 and use Leica SUMMILUX-H lenses with AI-powered bokeh. This allows for '2x hybrid zoom' (which we established last year is more like a crop than a zoom), but the phones also have 4-way focus (PDAF, CAF, laser, depth) and a dual-LED flash.

Huawei will launch these devices on Android 8, using their custom implementation called EMUI. Last generation was EMUI 5, and this generation will be called EMUI 8. The reason for the jump is two-fold: the number 8 is a highly positive number in Chinese culture, but also it addresses some comments as to why the EMUI numbering system was ‘behind’ the Android version. Huawei intends to keep EMUI’s version number paired with the Android version for the foreseeable future.

Huawei Mate 10 Series
                     Mate 10                  Mate 10 Pro              Mate 10 Porsche Design
SoC                  HiSilicon Kirin 970: 4x Cortex-A53 @ 1.84 GHz, 4x Cortex-A73 @ 2.36 GHz, ARM Mali-G72 MP12 @ ?
Display              5.9-inch 2560x1440       6.0-inch 2160x1080       6.0-inch 2160x1080
Dimensions           150.5 x 77.8 x 8.2 mm,   154.2 x 74.5 x 7.9 mm,   154.2 x 74.5 x 7.9 mm,
                     186 g                    178 g                    178 g
RAM                  4 GB                     4/6 GB                   6 GB
NAND                 64 GB (UFS 2.1)          64/128 GB (UFS 2.1)      256 GB (UFS 2.1)
                     + microSD                + microSD                + microSD
IP Rating            IP53                     IP67                     IP67
Battery              4000 mAh (15.28 Wh)
Front Camera         8MP, f/2.0
Rear Camera          Color: 12MP, f/1.6; Monochrome: 20MP, f/1.6; PDAF + Laser AF + Contrast AF + Depth; OIS; HDR; dual-tone LED flash
Modem                HiSilicon LTE (integrated), 2G / 3G / 4G LTE, Category 18/16 Download, Category 13 Upload
SIM Size             2x NanoSIM (dual standby)
Wireless             802.11a/b/g/n/ac, BT 4.2 LE, NFC, IrLED, GPS/Glonass/Galileo/BDS
Connectivity         USB 2.0 Type-C; 3.5mm headset (Mate 10 only)
Fingerprint Sensor   Front                    Rear                     Rear
Launch OS            Android 8.0 with EMUI 8.0
Launch Price         699 Euro (4/64)          799 Euro (6/128)         1349 Euro
US Price             ~$549-$599               ~$649-$699               Some* fingers (*may be more than some)

Pricing for the Mate 10 and Mate 10 Pro is likely to mirror the pricing for last year’s flagships. This means around $549-$599 for the regular edition and $649-$699 for the Pro. Add in another $100 for the higher capacity model, and probably another $250-$400 for the Porsche Design version. (Updated in table)

Read the full article here by AnandTech Article Channel

[Updated] Microsoft has patched Windows against the KRACK Wi-Fi vulnerability -- Google will secure Android soon

Earlier today, news broke about the KRACK vulnerability that affects the WPA2 protocol. Security researchers have warned that the problem affects millions of devices running everything from Windows to Android and Linux. Microsoft has announced that it has already released a security patch to fix the vulnerability in Windows. Google says that a patch for affected Android devices will be released "in the coming weeks." See also: KRACK warning: Severe WPA2 security vulnerability leaves millions of devices open to attack Android: DoubleLocker ransomware encrypts data and changes PINs In a statement to the Verge, Microsoft said: "We have released a… [Continue Reading]

Read the full article here by Betanews

Google and IBM launch open-source security tool for containers

Google and IBM, together with a few other partners, have released an open-source project that gathers metadata which developers can use to secure their software. According to an IBM blog post, the goal of the project, called Grafeas, is to help developers maintain security standards as microservices and containers reshape the software supply chain. Grafeas is built as a container security API; IBM will also add its own container scanning tool, Vulnerability Advisor, into Grafeas, which IBM says will be a "central source of truth" when it comes to enforcing security policies. The API will collect the metadata defining a user's software… [Continue Reading]

Read the full article here by Betanews

Severe flaw in WPA2 protocol leaves Wi-Fi traffic open to eavesdropping


An air of unease settled into security circles on Sunday as researchers prepared for the disclosure of high-severity vulnerabilities in the Wi-Fi Protected Access II protocol that make it possible for attackers to eavesdrop on Wi-Fi traffic passing between computers and access points.

The proof-of-concept exploit is called KRACK, short for Key Reinstallation Attacks. The research has been a closely guarded secret for weeks ahead of a coordinated disclosure that's scheduled for 8 a.m. Monday, east coast time. An advisory the US CERT recently distributed to about 100 organizations described the research this way:

US-CERT has become aware of several key management vulnerabilities in the 4-way handshake of the Wi-Fi Protected Access II (WPA2) security protocol. The impact of exploiting these vulnerabilities includes decryption, packet replay, TCP connection hijacking, HTTP content injection, and others. Note that as protocol-level issues, most or all correct implementations of the standard will be affected. The CERT/CC and the reporting researcher KU Leuven, will be publicly disclosing these vulnerabilities on 16 October 2017.

According to a researcher who has been briefed on the vulnerability, it works by exploiting a four-way handshake that's used to establish a key for encrypting traffic. During the third step, the key can be resent multiple times. When it's resent in certain ways, a cryptographic nonce can be reused in a way that completely undermines the encryption.
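To see why nonce reuse is fatal for this kind of encryption, note that a stream cipher encrypts by XORing the plaintext with a keystream derived from the key and nonce; reusing a nonce means two messages share a keystream, so XORing the two ciphertexts cancels the keystream entirely. A toy demonstration (a random byte string stands in for the keystream; this is not WPA2's actual cipher):

```python
import os

# Toy demonstration of why nonce reuse breaks stream-cipher encryption:
# two messages encrypted with the same keystream leak their XOR.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)  # stands in for cipher(key, nonce)

m1 = b"attack at dawn!!"
m2 = b"retreat at noon!"
c1, c2 = xor(m1, keystream), xor(m2, keystream)

# An eavesdropper who sees both ciphertexts recovers m1 XOR m2 -- and if
# either plaintext is guessable, the other falls out immediately.
assert xor(c1, c2) == xor(m1, m2)
print(xor(xor(c1, c2), m1) == m2)  # True
```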

A Github page belonging to one of the researchers and a separate placeholder website for the vulnerability used the following tags:

  • WPA2
  • key reinstallation
  • security protocols
  • network security, attacks
  • nonce reuse
  • handshake
  • packet number
  • initialization vector
Researchers briefed on the vulnerabilities said they are indexed as: CVE-2017-13077, CVE-2017-13078, CVE-2017-13079, CVE-2017-13080, CVE-2017-13081, CVE-2017-13082, CVE-2017-13084, CVE-2017-13086, CVE-2017-13087, and CVE-2017-13088. One researcher told Ars that Aruba and Ubiquiti, which sell wireless access points to large corporations and government organizations, already have updates available to patch or mitigate the vulnerabilities.

The vulnerabilities are scheduled to be formally presented in a talk titled Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2, scheduled for November 1 at the ACM Conference on Computer and Communications Security in Dallas. It's believed that Monday's disclosure will be made through the site. The researchers presenting the talk are Mathy Vanhoef and Frank Piessens of KU Leuven and imec-DistriNet, Maliheh Shirvanian and Nitesh Saxena of the University of Alabama at Birmingham, Yong Li of Huawei Technologies in Düsseldorf, Germany, and Sven Schäge of Ruhr-Universität Bochum in Germany. The researchers presented related research in August at the Black Hat Security Conference in Las Vegas.

The vast majority of existing access points aren't likely to be patched quickly, and some may not be patched at all. If initial reports are accurate that encryption-bypass exploits are easy and reliable against the WPA2 protocol, it's likely attackers will be able to eavesdrop on nearby Wi-Fi traffic as it passes between computers and access points. It might also mean it's possible to forge Dynamic Host Configuration Protocol settings, opening the door to hacks involving users' domain name service.

It wasn't possible to confirm the details reported in the CERT advisory or to assess the severity at the time this post went live. If eavesdropping or hijacking scenarios turn out to be easy to pull off, people should avoid using Wi-Fi whenever possible until a patch or mitigation is in place. When Wi-Fi is the only connection option, people should use HTTPS, STARTTLS, Secure Shell, and other reliable protocols to encrypt web and e-mail traffic as it passes between computers and access points. As a fallback, users should consider using a virtual private network as an added safety measure, but they should choose their VPN providers carefully, since many services can't be trusted to make users more secure. This post will be updated as more information becomes available.

Read the full article here by Ars Technica

Saturday 14 October 2017

WD is developing 40TB hard drives powered by microwaves

Western Digital (WD) may have lost a bid to buy Toshiba's flash memory technology, but is still hard at work on its bread-and-butter hard drives. The company has unveiled a breakthrough called microwave-assisted magnetic recording (MAMR) that will allow ever-higher disk capacities, up to 40TB by the year 2025. "Commercialization of MAMR technology will pave the way to higher recording densities and lower cost per terabyte hard disk drives," said VP of research John Rydning in a statement.

If you're wondering what microwaves have to do with hard drives, WD has developed a new type of drive head called a "spin torque oscillator" that generates a microwave field. That allows data to be written to magnetic media at a lower magnetic field than with conventional disks, making it possible to pack more bits into the same space.

"As a result, Western Digital's MAMR technology is now ready for prime time, and provides a more cost-effective, more reliable solution," the company said in a technical brief, adding that "MAMR also has the capability to extend areal density gains up to 4 Terabits per square inch." As with its current enterprise drives, WD's MAMR drives will use helium instead of air to reduce internal turbulence.
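As a rough sanity check, the quoted 4 Tb/in² ceiling is consistent with the 40TB target. The platter geometry below is an assumption (a typical 3.5-inch drive layout), not a Western Digital specification:

```python
import math

# Back-of-the-envelope: does 4 Tb/in^2 plausibly yield ~40 TB?
outer_r_in = 1.80          # usable outer radius, inches (assumed)
inner_r_in = 0.75          # usable inner radius, inches (assumed)
areal_density_tbit = 4.0   # terabits per square inch (from WD's brief)

# Recordable area is an annulus between the inner and outer radii.
surface_in2 = math.pi * (outer_r_in**2 - inner_r_in**2)
tbytes_per_surface = surface_in2 * areal_density_tbit / 8  # bits -> bytes

platters = 5               # helium fill allows dense platter stacks (assumed count)
capacity_tb = tbytes_per_surface * 2 * platters  # two surfaces per platter

print(f"~{surface_in2:.1f} in^2 per surface, ~{capacity_tb:.0f} TB total")
```

With these assumed numbers the estimate lands right around 40TB, which suggests the 2025 target is a near-full exploitation of the stated areal density rather than a conservative one.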

So how "ready for prime time" is it? Western Digital says MAMR-based drives for data centers will appear in the market starting in 2019, and it will produce 40TB 3.5-inch disks by 2025, with "continued expansion beyond that timeframe." WD didn't say what capacity early MAMR drives would pack, but it recently released its first 14TB drive via its HGST (formerly Hitachi) subsidiary, so we'd expect the MAMR variants to go beyond that.

Mechanical hard disks don't have nearly the speed or reliability of SSDs, but their cost per gigabyte is several times lower. That's crucial for data centers and cloud storage firms, especially since data-hungry AI software is becoming more and more pervasive. Don't expect to see MAMR drives in your local media backup (NAS) devices right away, but the technology should trickle down fairly soon, giving you enough storage for future 8K HDR videos.

Source: Western Digital

Read the full article here by Engadget

Four short links: 13 October 2017

Deep Learned Faces, $900 Selfie Camera, MSFT+AMZN AI, Rich Text Editor

  1. Generating Faces with Deconvolution Networks -- scroll down and watch the videos even if you don't read the text on how it was done. The "illegal" face section is hypnotic, particularly the randomly changing parameters ... it's like the subject is struggling to be a face. (via Matt Webb)
  2. Casio Is Selling $900 Selfie Cameras in China -- “We think that we can still be competitive in the digital camera market if we can make the purpose of the camera clear,” Niida says. “We don’t sell cameras with interchangeable lenses, so we haven’t been trying to sell cameras to people who are really into taking photos as a hobby. Rather than that, we want to focus on people who want to take beautiful photos more easily and who want to keep their memories.” Buried in there: The TR series is the result of aggressive focus on Casio’s part. It’s aimed at a particular demographic: young women in the Chinese-speaking world, and as such the facial recognition software is tuned specifically toward them. Yes, white guys, this isn't for you.
  3. Gluon -- Microsoft and Amazon join forces on an AI library, hoping to unseat (or at least rival) Google's tools.
  4. Trix -- A Rich Text Editor for Everyday Writing.
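For readers unfamiliar with the "deconvolution" (transposed convolution) layers behind item 1, here's a minimal 1D sketch of the upsampling operation such networks stack to grow a small latent code into a full image. This is illustrative only, not the linked project's actual architecture:

```python
import numpy as np

def conv_transpose_1d(x, kernel, stride=2):
    """Minimal transposed ("deconvolution") layer: upsamples its input.

    Each input value scatters a scaled copy of the kernel into the
    output, offset by `stride` per step -- the learned version of this
    is what turns a latent vector into a face image, layer by layer.
    """
    k = len(kernel)
    out = np.zeros((len(x) - 1) * stride + k)
    for i, v in enumerate(x):
        out[i * stride : i * stride + k] += v * np.asarray(kernel, dtype=float)
    return out

latent = np.array([1.0, 2.0])
up = conv_transpose_1d(latent, kernel=[1.0, 1.0, 1.0], stride=2)
# Output length grows from 2 to (2-1)*2 + 3 = 5: the layer upsamples.
assert len(up) == 5
```

The "struggling to be a face" effect in the videos comes from feeding such layers parameter vectors outside the range they were trained on.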

Continue reading Four short links: 13 October 2017.

Read the full article here by Four Short Links - O'Reilly Media

Friday 13 October 2017

Discord launches Verified servers for game developers, publishers

Discord is teaming up with various game developers and publishers to offer players the option to belong to their official Discord channels.

These servers will be run completely by community managers and other employees associated with the developers and publishers, Discord confirmed to Polygon. Each server will be moderated by the publisher or developer’s own team.

Playerunknown’s Battlegrounds, Minecraft, Supercell and Offworld are among the games and studios that currently have verified Discord servers. Checkmarks will appear alongside an appointed server’s name to designate that it is run by an official account.

(Image: what the new pop-up will look like for Discord users trying to find verified servers. Discord)
(Image: an example of what the invitation to verified servers will look like. Discord)

A representative from Discord told Polygon that the servers will mainly be used as a way for developers and publishers to share updates with players and engage with feedback. Think of it like a game’s page on Steam or developer blog, but with the ability for instantaneous conversation.

Verified servers will also allow developers to encourage non-hostile conversation around their games. With the ability to self-moderate, they’ll have the chance to remove heated discussion — including harassing or abusive language — from their server. Discord has confirmed in the past that it doesn’t moderate channels, but the service keeps an eye on un-verified servers to ensure that users are following the terms of service and user guidelines.
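The checkmark corresponds to a per-guild feature flag exposed through Discord's public API. A sketch of checking it, where the sample guild data is made up for illustration:

```python
def is_verified(guild_features: list) -> bool:
    # Discord returns a "features" array for each guild; verified
    # servers carry the "VERIFIED" flag (per Discord's API docs).
    return "VERIFIED" in guild_features

# Hypothetical guild payloads, trimmed to the relevant field:
official_guild = {"name": "PLAYERUNKNOWN'S BATTLEGROUNDS",
                  "features": ["VERIFIED", "VIP_REGIONS"]}
fan_guild = {"name": "PUBG fan hangout", "features": []}

assert is_verified(official_guild["features"])
assert not is_verified(fan_guild["features"])
```

Client apps render the checkmark from this flag, which is why it can't be faked by a server merely naming itself after a game.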

Discord has dealt with a slew of troublesome servers in the past, but a representative for the platform told Polygon that it looks into and deals with reported servers internally. Verified servers seem like one way of steering players of specific games to conversational windows that dedicated community managers can handle.

Developers or publishers who want to sign up for a verified server on Discord can do so on the official site.

Read the full article here by Polygon

Thursday 12 October 2017

Nissan modified this GT-R to be driven with a PS4 controller


A few years back, I wrote a feature titled "Why you’ll never drive your car with a joystick." Today, I learned I spoke too soon (and that Ars' Creative Director Aurich Lawson is prophetic).

Over in the UK, Nissan and a company called JLB Design have finally done it. Courtesy of Carbuyer, I learned of a tie-in with the soon-to-be-released Gran Turismo Sport, wherein JLB converted a Nissan GT-R to be controlled by a DualShock 4 controller rather than the normal steering wheel and pedals. What's more, it's a completely remote-controlled GT-R—renamed the GT-R/C—as demonstrated by Nissan racing driver Jann Mardenborough, who drove the car around the Silverstone racing circuit from the passenger seat of a helicopter circling above.

As you might imagine, the modifications were quite extensive. A RaceLogic VBOX system sent Mardenborough telemetry in the helicopter. Six computers in the trunk translated inputs from the PS4 controller and actuated the GT-R/C's steering, brakes, throttle, and transmission. And a pair of redundant safety systems were installed, allowing two different operators the ability to apply full brakes and cut the engine in the event of Mardenborough losing control of the GT-R/C.
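A drive-by-wire translation layer like this has to condition the raw stick values before they reach the steering actuator: a deadzone so thumb jitter doesn't wander the wheel, then rescaling so full deflection still reaches lock. The sketch below is purely illustrative; none of the names or constants come from JLB Design's actual system:

```python
def stick_to_steering(stick_x: float, max_angle_deg: float = 35.0,
                      deadzone: float = 0.08) -> float:
    """Map a normalized stick axis in [-1, 1] to a steering angle.

    Inputs inside the deadzone are ignored; outside it, the response is
    rescaled so that full deflection still commands max_angle_deg.
    """
    if abs(stick_x) < deadzone:
        return 0.0
    sign = 1.0 if stick_x > 0 else -1.0
    scaled = (abs(stick_x) - deadzone) / (1.0 - deadzone)
    return sign * scaled * max_angle_deg

assert stick_to_steering(0.0) == 0.0       # centered stick: no steering
assert stick_to_steering(1.0) == 35.0      # full right deflection
assert stick_to_steering(-1.0) == -35.0    # full left deflection
```

The real system layers the redundant kill switches described above on top of this, so a dropped control link fails safe rather than holding the last command.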


The choice of Mardenborough as the driver is an obvious one, as he was one of the first winners of the Nissan PlayStation GT Academy, a competition that's been turning gamers into professional racing drivers since 2008. Speaking with Carbuyer, he sounded rather enthused after setting a fastest lap of 1:17.47 around Silverstone's 1.6-mile (2.6km) National Circuit:

"This was once-in-a-lifetime, truly epic stuff," Mardenborough said. "The GT-R /C has brought my two worlds together—the virtual of gaming and the reality of motorsport—in a way I never thought possible. The response from the car when using the controller was far more engaging than I thought it would be. JLB Design has done an incredible job at making everything respond really well.

Steering, acceleration and braking were all intelligently configured, allowing for controlled application so I could really get a feel through the corners and hold it steady down the fast straights. Driving a full-size, remote-control GT-R to 131mph at Silverstone whilst chasing it down in a helicopter was an unforgettable experience."

Gran Turismo Sport is due to be released on October 17—barring any unforeseen six-year delays.

Listing image by Nissan

Read the full article here by Ars Technica

Wednesday 11 October 2017

Telemetry: OxygenOS secretly collects user data, sending OnePlus a wealth of information

OxygenOS, the customized version of Android used by OnePlus on its smartphones, has been found to be collecting data about users -- and it's not anonymized. Telemetry is something that has been associated with Windows 10, but now the Chinese smartphone manufacturer has its fans concerned. That a phone collects certain information about usage is not particularly unusual -- it helps to identify problems and speed up software development. But a security researcher's discovery that his OnePlus 2 was sending highly detailed information back to OnePlus without consent has set privacy alarm bells ringing (the issue also affects more recent… [Continue Reading]
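For contrast, the diagnostic value of such telemetry doesn't require identifiable data at all: hardware identifiers can be replaced with salted one-way hashes before upload. The field names below are illustrative, not OxygenOS's actual payload:

```python
import hashlib
import os

# Per-install salt, generated once on-device and never transmitted, so
# records from the same phone correlate without identifying its owner.
SALT = os.urandom(16)

def anonymize(record: dict, id_fields=("imei", "serial", "phone_number")) -> dict:
    """Replace identifying fields with stable, non-reversible pseudonyms."""
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hashlib.sha256(SALT + str(out[field]).encode()).hexdigest()
            out[field] = digest[:16]
    return out

raw = {"imei": "356938035643809", "event": "app_crash", "app": "com.example"}
safe = anonymize(raw)
assert safe["event"] == "app_crash"   # diagnostic value preserved
assert safe["imei"] != raw["imei"]    # identity no longer exposed
```

That OnePlus shipped the raw identifiers instead is what turned routine telemetry into a privacy story.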

Read the full article here by Betanews

Tuesday 10 October 2017

Playstation 3 Emulator Supporting Up to 10K Resolution

If you are one of those guys who likes to run a PlayStation 3 emulator on your box so that you can enjoy all those old games, RPCS3 now supports rendering at up to 10K. As you might guess, this can make some huge visual differences in game. I have started the video below at the Demon's Souls section, which is most impressive. The Vulkan API seems to be doing some very special things as well. Currently only 10 games are supported. Thanks cageymaru. Rendering a game in high resolutions such as 4K, while beautiful, is quite taxing on your hardware, and there is often a massive hit in performance. However, since most of the workload for RPCS3 is on the CPU and GPU usage is low, there is a lot of untapped performance just waiting to be used. All processing is done CPU side, and as far as the GPU is concerned it is simply rendering 2006-era graphics (yes, the PS3 is 11 years old now). We're happy to report that anyone with a dedicated graphics card that has Vulkan support can expect identical performance at 4K. Discussion
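The pixel arithmetic behind that claim, assuming common 16:9 figures (with "10K" taken here as 10240x5760, an assumption, since the quote doesn't give exact dimensions):

```python
# RPCS3's bottleneck is CPU-side emulation, so what the GPU absorbs is
# "only" the extra pixel fill. How much extra, relative to PS3-era 720p?
resolutions = {
    "720p (typical PS3 native)": (1280, 720),
    "4K": (3840, 2160),
    "10K (assumed 10240x5760)": (10240, 5760),
}

base_pixels = 1280 * 720
for name, (w, h) in resolutions.items():
    ratio = (w * h) / base_pixels
    print(f"{name}: {w * h:,} pixels ({ratio:.0f}x the native fill load)")
```

Even a 64x increase in fill work is trivial for a modern GPU rendering 2006-era geometry and shaders, which is why the CPU stays the limiting factor.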

Read the full article here by [H]ardOCP News/Article Feed