Thursday, 2 November 2017

PlayerUnknown's Battlegrounds Ditches Amazon Web Services for Microsoft Azure


Manu_PT said:
Game type? You talk like pubg is the ultimate cpu bound game or what? You even know what a raid on WoW is? Go figure.

Pubg is just an unoptimized crap that will die as soon as the big sharks enter the battle royale scene. Fortnite already stole a lot of players and it isnt even a good game. We talk in 1 year (battlefield BR, cod br, blizzard br etc etc)

Well, Fortnite has graphics damn near equivalent to modded Minecraft, and it's made by the same studio that makes the engine itself. Bluehole originally had to do some workarounds for limitations of UE4, since it is not meant for that type of game. But they took a chance because it was a non-proprietary engine with a ton of documentation, so anything they needed to do could be done relatively quickly. Epic Games has made a ton of changes to UE4 to accommodate the demands of a battle royale game while making Fortnite. Bluehole will get those changes when the official update to the engine is sent out to all studios using it.

Of course, if a AAA studio dives into the battle royale genre they will likely beat out PUBG, but those AAA studios also have far more talent and resources at their disposal. PUBG started from a studio with fewer than 40 developers. It expanded to over 100 within 6 months of entering early access and has made leaps and bounds in client performance.

PUBG isn't that unoptimized; the only people I see even using that word these days have computers from what is now considered the stone age, and expect devs to bend over backwards for them and their dual-core rigs with GPUs from 5 years ago.

Again, this game is early access, with only about a year and a half of development time under its belt. It has improved far more in that short amount of time than H1Z1, which has been in early access for 4 years with damn near zero progress.

Simply put, your expectations for an unfinished product are stupidly high.



Read the full article here by techPowerUp!

Feral Announces F1 2017 Linux System Requirements

Feral Interactive has now published the system requirements for their Linux port of F1 2017, with this racing game set to be released on Thursday. As previously mentioned, this racing game is using Vulkan, and in fact exclusively uses Vulkan (ala no OpenGL). Feral took to Reddit this morning to confirm the final system requirements for the Linux port:

To play F1 2017, you will need a 3.3GHz Intel Core i3-3225 processor running Ubuntu 17.04, 4GB RAM and a 2GB Nvidia 680 or 3rd Generation AMD Graphics Core Next (Volcanic Islands) graphics card or better*.

For best performance, we recommend a 3.5GHz Intel Core i5-6600K, 8GB RAM and an 8GB Nvidia 1070 graphics card or better.

*Intel graphics cards are not supported. Nvidia graphics cards require drivers 384.90 or later. AMD graphics cards require Mesa drivers 17.2.2.

So basically a GeForce GTX 680 with the NVIDIA binary blob, or any AMD GCN 1.2 Volcanic Islands GPU or newer, i.e. the cards that use the AMDGPU DRM driver by default, like the Radeon R9 285. Feral recommends the GeForce GTX 1070 or better for best performance.

The important bit for Radeon RADV users is being on Mesa 17.2.2 or newer. It doesn't appear AMDGPU-PRO is supported.

Under Windows, F1 2017 needs a GTX 460 / HD 5870 or better and recommends a GTX 1070 / RX 480. So the Linux requirements for F1 2017 are stiffer than under Windows, and the port relies upon Vulkan rather than D3D11.

We will be having plenty of Vulkan F1 2017 Linux NVIDIA/Radeon benchmarks on launch day.



Read the full article here by Phoronix

My VM is lighter (and safer) than your container

My VM is lighter (and safer) than your container, Manco et al., SOSP’17

Can we have the improved isolation of VMs, with the efficiency of containers? In today’s paper choice the authors investigate the boundaries of Xen-based VM performance. They find and eliminate bottlenecks when launching large numbers of lightweight VMs (both unikernels and minimal Linux VMs). The resulting system is called LightVM and with a minimal unikernel image, it’s possible to boot a VM in 4ms. For comparison, fork/exec on Linux takes approximately 1ms. On the same system, Docker containers start in about 150ms.

These results are obtained when the LightVM guest is a unikernel. You’re probably only going to create a unikernel in specialised cases. (One interesting such case in the paper is a Micropython-based unikernel that can be used to support serverless function execution.) The authors also created an automated build system called Tinyx for creating minimalistic Linux VM images targeted at running a single application. If we look at boot times for a Tinyx VM as compared to Docker, the performance is very close up to about 250 VMs/containers per core.

Beyond that point, Docker starts to edge it, since even the idle minimal Linux distribution created by TinyX does run some occasional background tasks.

How does Xen scale with number of VMs, and where are the bottlenecks?

As the following chart shows, the biggest single factor limiting the scalability and performance of virtualisation is the size of the guest VMs. To produce that chart, a unikernel VM was booted from ramdisk, with varying sizes of binary objects injected into the uncompressed image file. So all the effects are due to image size.

So if we want fast booting, we know that image size is going to matter. We’ve looked at unikernels on The Morning Paper before, and they give you the smallest possible guest image. In this paper, the authors use Mini-OS to create a variety of unikernels, including the ‘daytime’ unikernel implementing a TCP service that returns the current time. This is 480KB uncompressed, and runs in 3.6MB of RAM. This unikernel is used to test the lower bound of memory consumption for possible VMs.

Making your own unikernel image based on Mini-OS is probably more work than many people are prepared to do though, so the authors also created Tinyx.

Tinyx is an automated build system that creates minimalistic Linux VM images targeted at running a single application. The tool builds, in essence, a VM consisting of a minimalistic, Linux-based distribution along with an optimized Linux kernel. It provides a middle point between a highly specialized unikernel, which has the best performance but requires porting of applications to a minimalistic OS, and a full-fledged general-purpose OS VM that supports a large number of applications out of the box but incurs performance overheads.

Tinyx creates kernel images that are half the size of typical Debian kernels, and have significantly smaller runtime memory usage (1.6MB for Tinyx vs 8MB for Debian).

Using the small VM images thus obtained, we can probe the behaviour of Xen itself when launching lots of VMs. When launching 1000 guests, here are the boot and create times for a Debian minimal install, Tinyx, MiniOS (unikernel), and, for comparison on the same hardware, Docker containers and simple process creation.

As we keep creating VMs, the creation time increases noticeably (note the logarithmic scale): it takes 42s, 10s and 700ms to create the thousandth Debian, Tinyx, and unikernel guest, respectively.

As the size of the VM decreases, the creation time is responsible for ever larger portions of the overall time taken to get to availability. To understand where all the time was going, the team instrumented Xen to reveal this picture:

XenStore interaction and device creation dominate. Of these, the device creation overhead is fairly constant, but the XenStore overhead grows superlinearly.

The design of LightVM

Our target is to achieve VM boot times comparable to process startup times. Xen has not been engineered for this objective, as the results in the previous section show, and the root of these problems is deeper than just inefficient code. For instance, one fundamental problem with the XenStore is its centralized, filesystem-like API which is simply too slow for use during VM creation and boot, requiring tens of interrupts and privilege domain crossings.

I bet it was hard to conceive of anyone launching 1000 guest VMs when that design was first created!

LightVM redesigns the Xen control plane with a lean driver called noxs (for ‘no XenStore’) that replaces the XenStore and allows direct communication between front-end and back-end drivers via shared memory.

LightVM also keeps on hand a pool of pre-prepared VM shells, through which all the processing common to all VMs is done in the background. When a VM creation command is issued, a suitable shell fitting the VM requirements is taken from the pool and only the final initialisation steps such as loading the kernel image into memory and finalising device initialisation need to be done.
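
The shell pool is essentially a warm-pool pattern. Here is a minimal Python sketch just to make the mechanism concrete; the names (prepare_shell, create_vm) are illustrative, not LightVM's actual interface:

```python
import queue
import threading

POOL_SIZE = 8  # number of pre-prepared VM shells kept warm

def prepare_shell():
    """Stand-in for the work common to all VMs: reserve memory,
    pre-create devices, set up hypervisor-side bookkeeping."""
    return {"memory": "reserved", "devices": "pre-created"}

shell_pool = queue.Queue(maxsize=POOL_SIZE)

def refill_pool():
    # A background thread keeps the pool topped up; put() blocks
    # once the pool is full.
    while True:
        shell_pool.put(prepare_shell())

def create_vm(kernel_image):
    """VM creation now only performs the final per-VM steps."""
    shell = shell_pool.get()           # take a ready-made shell
    shell["kernel"] = kernel_image     # load the kernel image into memory
    shell["devices"] = "finalized"     # finish device initialisation
    return shell

threading.Thread(target=refill_pool, daemon=True).start()
```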

Device creation in standard Xen ends up calling bash scripts, which is a slow process. LightVM replaces this with a binary daemon that executes a pre-defined setup with no forking or bash scripts.

Performance

We saw the boot times for LightVM with a variety of images at the start of this post. Furthermore, LightVM can save a VM in around 30ms, and restore it in 20ms. Standard Xen needs 128ms and 550ms respectively.

Unikernel memory usage is fairly close to Docker containers. Tinyx needs more, but only 22GB more across 1000 guests. That’s a small fraction of the RAM of current servers.

CPU usage for VMs can also be on a par with containers, so long as the VMs are trimmed to include only the necessary functionality:

Use cases

The authors present four different use cases where LightVM + lightweight VMs can shine.

In all the following scenarios, using containers would help performance but weaken isolation, while using full-blown VMs would provide the same isolation as lightweight VMs, but with poorer performance.

  1. Personal firewalls per mobile user, running in mobile gateways at or near cellular base stations (mobile edge computing – MEC). Here a ClickOS unikernel image is used, and 8000 firewalls can be run on a 64-core AMD machine with 10ms boot times. A single machine running LightVM at the edge in this way can run personalized firewalls for all users in a cell without becoming a bottleneck.
  2. Just-in-time service instantiation in mobile edge computing (similar to JITSU).
  3. High-density TLS termination at CDNs, which requires the long-term secret key of the content provider. Hence strong isolation between different content providers’ proxies is desirable.
  4. Creation of a lightweight compute service such as AWS Lambda. For this use case they use a Micropython-based unikernel to run computations written in Python. It takes about 1.3ms to boot and start executing a function. When the system is deliberately stressed with more requests arriving than the test machine can cope with, service time goes up fairly linearly until about 800 VMs.

The use cases we presented show that there is a real need for lightweight virtualization, and that it is possible to simultaneously achieve both good isolation and performance on par with or better than containers.

A recent post on the Google Cloud Platform blog, ‘Demystifying container vs VM-based security: security in plaintext’, provides an interesting perspective on container security and isolation from a company that has been running a container-based infrastructure for a very long time.



Read the full article here by the morning paper

Google will no longer offer its flight price data to other sites

Google is getting serious about its role as a travel service. Come April 10, 2018, the search giant will shut down access to its QPX Express API, which is used by the likes of Kayak and Orbitz for airfare data, as spotted by Hacker News. This could seriously affect those sites' customers. Travel listings are one thing, but if you've stored a ton of reward points with one and it isn't able to fetch flight times and prices for a trip next summer, that could cause some headaches.

As The Verge and TechCrunch note, there are data alternatives for travel websites including Fareportal and Skypicker. TechCrunch reports that Google is also closing down ITA's flight-search app as of next month. Mountain View recently launched its Trips app as a way to gather all your travel data from Gmail and present it in a mobile-friendly way, so yeah, you could say this isn't a passing interest for the company. Why? Well, mining travel data will give it another avenue for pushing ads your way.

At the same time, it'll push people toward using Google's own flight searches and tracking; it's a move that's good for Google and basically no one else. Five years ago, Google bought ITA Software, the company behind an airfare search system known as QPX, which is used by Bing Travel and CheapTickets, and airlines including American and United. At the time the Department of Justice approved the acquisition under the condition that Google would keep the API public for five years. That five years is now over.

Via: Hacker News, TechCrunch

Source: Google



Read the full article here by Engadget

Monday, 30 October 2017

F1 | From Switzerland: Ericsson and Leclerc to be Sauber's 2018 drivers

According to the Swiss newspaper Blick, Sauber has chosen its drivers for next season. The pairing will be...

Read the full article here by FormulaPassion.it

Thursday, 26 October 2017

French Duo Decodes Bitcoin QR From TV Broadcast

In an interview on French television, entrepreneur Roger Ver offered $1,000 worth of bitcoins (about 3 bitcoins at the time) to viewers. The only catch was that the QR code was blurred out during most of the broadcast. Fortunately, two enterprising hackers noticed the code wasn't sufficiently blurred in parts of the video segment. The duo scanned and decoded the missing portions of the QR code, working through 2.1 million possible combinations and narrowing them down to two candidates for the bitcoin private key. "The money is the cherry on the cake but the most exciting was when we found the private key and the algorithm told us this was the one," says Mr Sassano, adding that he has not yet withdrawn the money.

Read the full article here by [H]ardOCP News/Article Feed

Tuesday, 24 October 2017

Android-x86 7.1-rc2 Now Supports NVMe SSDs, Better QEMU VirGL

The Android-x86 project derived from Google's Android Open-Source Project code-base remains officially at Android 6.0, but there is an Android 7.1 "Nougat" build available for testing.

Quietly released earlier this month was a second release candidate based on Android 7.1. The Android-x86 7.1-RC2 release is based on upstream AOSP 7.1-RC2 / Nougat-MR2 along with some extra improvements for this x86-targeted build.

Further improvements with Android-x86 7.1-rc2 include SDCardFS support, SwiftShader as a new software renderer for systems lacking GPU acceleration, better QEMU VirGL support for 3D acceleration in guest VMs, support for NVMe solid-state storage devices, Mesa 17.1.10, improved native bridge support, and an update to the Linux 4.9.54 kernel.

More details on this new Android-x86 7 "Nougat" test release are available via Android-x86.org. Sadly, there are no indications of an Android-x86 8 "Oreo" release coming anytime soon, with these x86-focused builds continuing to lag further behind upstream AOSP.



Read the full article here by Phoronix

Monday, 23 October 2017

Mercedes handles the competition because it knows how to handle data, too



AUSTIN, Texas—History happened Sunday at the Circuit of the Americas. Formula 1 driver Lewis Hamilton won for the fifth time in six years at Austin, inching him closer to a fourth world championship this year. And on a macro scale, Hamilton’s victory sealed a fourth straight Formula One constructors’ championship for the Silver Arrows team at Mercedes. According to ESPN, that makes Mercedes the first team to win consecutive championships across a major regulation change.

How does a team achieve such sustained dominance—Mercedes has won a staggering 51 of 59 total races between 2014 and 2016—in an era where the sport has witnessed an infusion of more money, more engineering talent, and more of those aforementioned regulations? If you listen to members of the Mercedes-AMG Petronas Motorsport tech team tell it, the answer starts in the team’s network stacks.

“The winning direction today is understanding what kind of problem are you trying to solve. Engineers are all interested in solving problems, but my mantra for a while has been ‘make sure you’re solving the right problem and not just the first one that comes along,’” Geoff Willis, Mercedes-AMG Petronas Motorsport’s former technical director and the team’s newly minted digital, engineering, and transformation director, tells Ars.

“With the top teams, there’s much less trial and error and more predictive understanding. So before we go to a race like here in Austin, we’ve done weeks and weeks of simulations of how to set the car up; drivers have done simulations in it, too. We have a good picture of what to expect, so what we look for when we get here: ‘Is there anything that alerts us to the car not behaving as we expect?’ If so, then we have a lot of what if studies to rely on.”

The ability to recognize and address reliability issues swiftly was certainly the theme when Ars got the opportunity to tour the Mercedes garage ahead of this weekend’s race. That invitation didn’t come from Mercedes; rather, it came from Pure Storage, the California company that partnered with the carmaker early in 2016 to bring flash storage both to the factory and trackside. Network gear may seem like only a small piece of Mercedes’ winning puzzle, but the IT-minded on pit row quickly stressed how important their new storage solution can be.

Pure Storage's teaser video outlining the Mercedes partnership.

Simple logistics

Bottom-line numbers made the switch to Pure Storage flash arrays an easy decision for Mercedes, especially considering that hard disk drives were still in vogue within F1’s last decade. So in a sport where garage size can vary week to week (with Austin being on the smaller end: 2.5 Austin garages would fit in the Abu Dhabi one, according to the team), the new devices save a tremendous amount of space. Matt Harris, Mercedes' head of IT, says the team reduced the size of its networking stacks by nearly 70 percent, enough to make up the device cost with only two years of freight savings. “If you keep the weight down and save on cost, you can invest in other performance areas,” says Christian Dixon, a partnership manager on the Mercedes team. “And the more room we can save, the more equipment we can bring.”

More important than the physical logistics improvement, however, the Pure Storage arrays helped Mercedes store and access its whopping amount of data more efficiently. Pure Storage says its technology cuts the amount of data that needs to be stored in a location twice as efficiently as its competitors, and (crucially for motorsport) it can transmit data in real time. As you might expect, the Mercedes team has needs more urgent and much larger than the Exchange archives of your average office space.

“Think of the cars as sensors going around the track, picking up info on acceleration, vibrations, pressures, temperatures—we have over 200 sensors on the car,” Dixon says. “We record over 100 times a second with 1,000 channels of data—we’re creating 1.8 billion data points.”

“And we generate 500GB in a race weekend, not just from the car but from everything we do,” Harris adds. “In fact the processing power of the car is the biggest problem—if the processor was faster, we could get data off faster. But now we have to compromise by weighing speed of offloading, speed of turnaround for the car to make decisions, and how much data we want to generate.” (Harris notes the ECU processor, dating back to 2009, is practically the only thing on the car that hasn’t radically changed in recent years.)

Trackside, Harris says 30 or so teammates are dedicated to looking at the data, and updating their systems from relying on legacy servers to the Pure Storage arrays has enabled those datawatchers to act more quickly. “[With the old system], they knew it’d be one to two minutes to open the file, read through the data, and make a decision,” he says. “Opening the wrong bit of data would add time. Now, Pure brings the process down—you can actually make the wrong decision on which piece of data to open without compromising the next run of the car.”

For a real-world example of this new infrastructure supporting the on-track efforts, Harris points to this year’s race in Singapore. Valtteri Bottas, Mercedes’ other world-class driver, kept telling the team he felt a cut in the engine. “But the guys kept saying, ‘No you’re not, you’re not,’” Harris says. “But they had to keep getting more refined on the data to see it; it ended up being a 13,000th of a second and Valtteri could feel it. It was a magnetic field the bridge created.”

The future, where ML meets Mercedes

As you may guess based on their recent history, the Mercedes team is already thinking extensively about where data analysis and storage need to be in the F1 future. To that end, Harris says, the team has started toying with ways to leverage modern machine-learning and artificial intelligence techniques, too. At their factory back in Brackley, England, they rely on Pure Storage Flash Blades (a scalable, parallel storage solution) to store all simulation results and historic data. Mercedes then combines that with another partnership, this one with a company called Tibco that produces software capable of leveraging machine learning for big data analytics.

“We always knew collecting data was a good thing, but we weren’t using it efficiently—it was hard to know what you want to find out and what’s useful to do,” Dixon says.

“So we asked, ‘How can we get rid of the normal data?” Harris continues. “We still keep that on a filer, but we don’t have to waste our time to look at it if it’s normal. What you want is abnormal data—is it abnormal because we made a change, or is there an issue, or is some kind of trend happening? We wanted to start automating the search for some of that since there’s only so many sets of eyes. These machine-learning, deep-learning techniques we’re beginning to look at it—and we are new to it, though learning fast—what we can start doing is immense.”

Willis has been in the sport for decades, much of that time as a technical director across various title-winning teams. He says collecting and understanding data is the area with the biggest gap between successful and unsuccessful F1 teams these days. So just as he helped encourage the team to embrace computer simulations and models once upon a time, today he’s also championing machine-learning adoption within Mercedes.

“I’m not sure whether to say F1 is slow to the party, but we’re just starting to apply this to a lot of areas. We have a handful of machine learning projects in very different areas: race strategy, testing, analysis of software, analysis of component failures,” he says. “Ultimately, it’ll lead to better decision-making. We have lots of data, but you have to do something to categorize it and know where it is before it becomes knowledge. When you then have that knowledge and understand how it all fits together; that’s the real driver for performance in F1.”

Listing image by Mercedes-AMG Petronas Motorsport



Read the full article here by Ars Technica

Facebook teams up with PayPal for money transfers via Messenger

Now here's an unusual partnership! Facebook, the heavyweight of social networks, and PayPal, the undisputed leader in secure online payments, have joined forces to let users pay directly through Messenger, the blue giant's chat application (not Intel this time!). With $24 billion exchanged in 2017, capturing even a fraction would make this alliance a success. If problems arise, a bot is available to automatically handle most common requests (password resets, disputes, help). Note that a PayPal account (and, of course, a Facebook account) is required to use this feature.

For the moment, the feature is only available in the land of Trump, but barring any scandal, the old continent will no doubt get it as well. It must be said that bringing conversations and payments together in a single application makes life easier for users... and for the statisticians analyzing the causes, dependencies, and other characteristics of those payments, which is reminiscent of a certain Chinese app.

More than ever, it will be worth keeping a close eye on how this service is used (without slipping into paranoia) so that settling up between friends doesn't become a gold mine for others.

[Image: the "Paybook" logo. The CDH offers this 100% authentic logo for the French version!]



Read the full article here by Le comptoir du hardware

Sunday, 22 October 2017

Gran Turismo Sport review: A brilliant, but very new, direction for the series



Game Details

Developer: Polyphony Digital
Publisher: Sony
Platform: PS4
Release Date: October 17th
Price: $59.99
Links: Official website 

Gran Turismo Sport is a great racing game. What it's not is a simple PS4 port of the last GT game. Almost everything about this latest release is different from every game that has come before it in the series. There are way fewer selectable cars than the competition (and previous GT games). There aren't many tracks. You won't spend hours buying new parts for your car or taking it for an oil change or a car wash. Gran Turismo Sport might not be the world’s most accurate driving simulation, but it’s fun—a lot of fun, particularly with a steering wheel. And refreshingly, it doesn't try to make you open your wallet to unlock anything.

But if racing against other humans online isn't something you care for, GTS is not the game for you. Unlike GT games of old, GTS is all about racing online, and maybe—just maybe—becoming a real racing driver at the end.

You can count on some game franchises to release new titles with metronomic regularity. Not so for the irregular and infrequent installments from Kazunori Yamauchi and his team Polyphony Digital. Deadlines were always a problem with the series, horribly exacerbated by the "nightmare" caused by the PS3 and its Cell processor. But now there's a new GT for the PS4, packed full of super-high definition and virtual reality. By my count, it's the 16th game (including the two-wheeled Tourist Trophy) in a series that dates back two decades with over 70 million copies sold.

Pretty much all of those games followed the same formula. You earn credits in races, then you spend those riches buying and tuning cars to win more races. Gran Turismo Sport bucks that trend in ways that make it quite clear why the developer didn't name the game Gran Turismo 7 (even if Yamauchi thinks that name would have worked).

Late to online, but early to e-sports?

The focus here is very much on the online experience, an emblematic example of an industry trend away from big-budget single-player games. The online focus is present to the extent that the game is quite limited when it can't speak to its servers. This has generated a lot of consternation in certain corners, and anyone looking for an updated version of the excellent GT6 is going to be disappointed. There are only 170-odd cars, and only one of those is a Mazda MX-5 Miata. The solo "Campaign" mode will probably keep you busy for a week, if that. And a lot of favorite tracks are missing—although the 'Ring is still present and correct.

Polyphony Digital was late to the whole idea of online multiplayer racing, only adding it in for 2008's GT5: Prologue. In some ways, GTS can be seen as making up for lost time.

But the series was way ahead of its time in taking the venture seriously enough to boost it from being a mere game to an e-sport. Back in 2008, and together with Nissan, Sony created the GT Academy, which has used the game to find promising talent to turn into actual racing drivers. GT Academy did a lot to legitimize racing games among the professional racing crowd, and e-sports features are now standard issue for any racing title released in 2017. (It's a must-have for actual racing series like Formula E and Formula 1.)

In GTS, that online competition is the core of the game. Polyphony Digital has even partnered with the Fédération Internationale de l'Automobile, or FIA, the body that organizes world motorsports and series like Formula 1. The two got together in 2014 with the idea that an FIA-approved game could earn players a "digital" FIA racing license that may translate to the real world. Racing licenses are required by sanctioning bodies to compete in their series, and they aren't cheap to obtain, which is a major roadblock for many aspiring racers. However, it's still a bit early to see how this "digital" version will pan out; we asked Sony if there is any fresh info but were told to expect news about licenses in the next few weeks.

Officially sanctioned online racing is the game's Sport mode. There are a number of regular Daily Races, which take place at 20-minute intervals throughout the day. Then there are two different FIA series and a Polyphony Digital Championship. Those three all get underway in early November, so our only experience is currently with the daily races, each of which is a 15-minute affair on a relatively short track suited to beginners. (Right now, that's the Brands Hatch Indy circuit, the East Course at Suzuka, and the fictional Northern Isle Speedway oval, although they will change periodically.)



Read the full article here by Ars Technica

Thursday, 19 October 2017

F1 on Netflix in 2018: negotiations underway

A partnership between Formula 1 and Netflix could materialize as early as 2018. Motorsport's premier category could...

Read the full article here by FormulaPassion.it

AlphaGo Zero Goes From Rank Beginner to Grandmaster in Three Days—Without Any Help

In the 1970 sci-fi thriller Colossus: The Forbin Project, a computer designed to control the United States’ nuclear weapons is switched on, and immediately discovers the existence of a Soviet counterpart.

The two machines, Colossus and Guardian, trade equations, beginning with  “1 + 1 = 2.”  The math moves faster and faster, advancing through calculus and beyond until suddenly the blurry cascade of symbols stops. The two machines have become one, and it has mankind by the throat.

Hah, you say. Development work takes a lot longer than that.

Maybe not. Today DeepMind, a London-based subsidiary of Google, announced that it has developed a machine that plays the ancient Chinese game of Go much better than its predecessor, AlphaGo, which last year beat Lee Sedol, a world-class player, in Seoul.

The earlier program was trained for months on a massive database of master games and got plenty of pointers—training wheels, as it were—from its human creators. Then it improved further by playing countless games against itself. But the new one, called AlphaGo Zero, received no training wheels; it trained itself all the way from tyro to grandmaster.

In three days.

After a few more days of training, the new machine was pitted against the old one in games held at the standard tournament speed of two hours per player, per game. AlphaGo Zero won by 100 games to zero.

To understand the new system, we must first review last year’s version. It has three parts: a search algorithm, a Monte Carlo simulator, and two deep neural networks.

Search algorithms dictate the moves within computer chess programs. The algorithm begins by listing every possible move, then every possible rejoinder, and so on, generating a tree of analysis. Next, it uses a second algorithm to evaluate the final position on each branch of the tree. Finally, it works its way back to select the move that leads to the best outcome should the opponent also play the best moves. Search is of only limited value in Go, because it’s so hard to evaluate final positions, a problem explained in IEEE Spectrum 10 years ago by Feng-hsiung Hsu, who programmed IBM’s Deep Blue, which defeated then-World Chess Champion Garry Kasparov.
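
As a toy illustration of that procedure (generate the tree, evaluate the leaves, back the values up), here is a minimal negamax-style sketch in Python. The game-specific pieces (legal_moves, apply_move, evaluate) are placeholders to be supplied by a real game implementation:

```python
def negamax(position, depth, legal_moves, apply_move, evaluate):
    """Score `position` for the side to move, assuming the opponent
    also plays the best moves."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)   # score the leaf position
    best = float("-inf")
    for move in moves:
        child = apply_move(position, move)
        # A good position for the opponent is a bad one for us.
        best = max(best, -negamax(child, depth - 1,
                                  legal_moves, apply_move, evaluate))
    return best
```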

Monte Carlo simulation instead generates great numbers of possible games to get an idea of how often a given move leads to good results. This is what financial planners do when they take the known statistical variance for stocks, bonds, and inflation and use it to generate far more alternative histories than the actual periods for which good records exist. Other Go programmers had already tried this method, with decent results, as described in Spectrum in 2014 by Jonathan Schaeffer, Martin Müller, and Akihiro Kishimoto.
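
The Monte Carlo idea, in the same toy style: play many random games from each candidate move and prefer the move with the best average outcome. Again, the game-specific functions are placeholders:

```python
def monte_carlo_move(position, legal_moves, apply_move, random_playout,
                     playouts_per_move=1000):
    """Pick the move whose random playouts win most often for us.
    `random_playout(pos)` should return 1 if the side to move at `pos`
    eventually wins the simulated game, else 0."""
    def win_rate(move):
        child = apply_move(position, move)
        # `child` is scored from the opponent's perspective, so invert.
        wins = sum(1 - random_playout(child)
                   for _ in range(playouts_per_move))
        return wins / playouts_per_move
    return max(legal_moves(position), key=win_rate)
```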

Deep neural networks were applied to Go for the first time by DeepMind’s engineers, led by CEO Demis Hassabis and David Silver. On top of search and Monte Carlo, their original AlphaGo system used two networks, one trained to imitate the play of masters, as exemplified in a huge database of games, and another to evaluate positions. Then the program played millions of times against itself to improve beyond the level of mere human players.

DeepMind calls such self-training reinforcement learning, and AlphaGo Zero relied on this technique alone: no human game data and no hand-crafted features, just self-play guided by Monte Carlo tree search. The machine played itself repeatedly, looking only at the board and at the black and white stones that take their places, move by move, at the intersections of 19 vertical and 19 horizontal lines. And it used one neural network rather than two.

“After 72 hours, we evaluated AlphaGo Zero against the exact version... that defeated Lee Sedol, under the same 2-hour time controls and match conditions that were used in the man-machine match in Seoul,” write Silver, Hassabis and their co-authors today in Nature. “AlphaGo Zero used a single machine with 4 tensor processing units (TPUs), whereas AlphaGo Lee was distributed over many machines and used 48 TPUs. AlphaGo Zero defeated AlphaGo Lee by 100 games to 0.”

Don’t get the idea that this stuff is easy. The authors explain their work with a jungle of symbols reminiscent of the conversation between Guardian and Colossus. Here’s a sample:

“MCTS may be viewed as a self-play algorithm that, given neural network parameters $\theta$ and a root position $s$, computes a vector of search probabilities recommending moves to play, $\pi = \alpha_\theta(s)$, proportional to the exponentiated visit count for each move, $\pi_a \propto N(s, a)^{1/\tau}$, where $\tau$ is a temperature parameter.”
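
Stripped of the notation, the quoted formula just says: turn the visit counts from search into move probabilities, with τ controlling how sharply the distribution favours the most-visited move. A small sketch of that calculation (my reading of the formula, not DeepMind's code):

```python
def search_probabilities(visit_counts, tau=1.0):
    """pi_a proportional to N(s, a)^(1/tau): convert MCTS visit counts
    into a probability distribution over moves. tau=1 is plain
    proportionality; smaller tau sharpens the distribution toward the
    most-visited move."""
    weights = [n ** (1.0 / tau) for n in visit_counts]
    total = sum(weights)
    return [w / total for w in weights]

# Visit counts of [90, 9, 1] give [0.9, 0.09, 0.01] at tau=1.0,
# and roughly [0.99, 0.0099, 0.0001] at tau=0.5.
print(search_probabilities([90, 9, 1], tau=0.5))
```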

[Chart: AlphaGo's power consumption has dropped with each generation. Image: DeepMind]

I looked it up: “temperature” is a concept derived from statistical mechanics.

To a Go player, the result is a mixture of the familiar and the strange. In a commentary in Nature, Andy Okun and Andrew Jackson of the American Go Association write: “At each stage of the game, it seems to gain a bit here and lose a bit there, but somehow it ends up slightly ahead, as if by magic.”

And, the commentators add, the machine’s self-taught methods in the early and later parts of the game confirm the lore that grandmasters have accumulated over centuries of play. “But some of its middle-game judgements are truly mysterious.”

The DeepMind researchers discovered another bit of weirdness. When they had a neural network train itself to predict the moves of expert players, it did very well, though it took a bit longer to reach the standard of a system trained with human supervision. However, the self-trained network played better overall, suggesting “that AlphaGo Zero may be learning a strategy that is qualitatively different to human play.”

Different and better. Toward the end of Colossus: The Forbin Project, the computer says, “What I am began in Man's mind, but I have progressed further than Man.”

But before we welcome our new overlords, a splash of cold water may be in order. And there is no better supplier of such coolant for AI hype than Rodney Brooks, who recently wrote for Spectrum on the limitations of self-driving cars.

In his keynote at the IEEE TechEthics Conference, held on Friday in Washington, D.C., Brooks said he’d asked the creators of the original AlphaGo how it would have fared against Lee Sedol if, at the last minute, the board had been enlarged by 10 lines, to 29 x 29. They told him the machine couldn’t have managed even if the board had been shrunk by a single line, to 18 x 18. That’s how specialized these deep neural networks are.

Brooks showed the audience a photo that Google’s AI system had labeled as a scene of people playing frisbee. “If a person had done this,” Brooks said, “we’d assume that he’d know a lot of other things about frisbees—that he could tell us whether a three-month-old can play frisbee, or whether you can eat a frisbee. Google’s AI can’t!”

AlphaGo Zero can’t tell us that Go is harder than checkers, that it involves occupying territory rather than checkmating the opponent’s king, or that it is a game. It can’t tell us anything at all. 

But no human can stand against it.



Read the full article here by Computing: IEEE Spectrum

Wednesday, 18 October 2017

Research Risks

The 1919 Great Boston Molasses Flood remained the deadliest confectionery containment accident until the Canadian Space Agency's 2031 orbital maple syrup delivery disaster.

Read the full article here by xkcd.com

Intel proposes a mathematical formula for self-driving car safety

As autonomous vehicles become a part of the landscape, there are more questions than ever about their safety, and how to determine responsibility when they get in an accident. With so many companies (Alphabet, Uber, GM, Tesla and Ford -- just to name a few) working on different technology, there's also a question of how to establish standards on any level. Now Amnon Shashua, the CEO of Mobileye (recently acquired by Intel), is proposing a model called Responsibility Sensitive Safety (RSS) to "prove" the safety of autonomous vehicles.

Safe Distance Formula
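
The formula image that originally appeared under this heading doesn't survive here. For reference, the longitudinal safe-distance rule from the paper takes roughly the following form (reconstructed from the Arxiv preprint's notation; treat it as a sketch and consult the PDF for the authoritative statement). A rear car at speed $v_r$ following a front car at speed $v_f$, with response time $\rho$, must keep at least

$$ d_{\min} = \left[\, v_r\,\rho + \tfrac{1}{2}\,a_{\max,\mathrm{accel}}\,\rho^2 + \frac{\left(v_r + \rho\,a_{\max,\mathrm{accel}}\right)^2}{2\,a_{\min,\mathrm{brake}}} - \frac{v_f^2}{2\,a_{\max,\mathrm{brake}}} \,\right]_+ $$

where $a_{\max,\mathrm{accel}}$ bounds how hard the rear car might accelerate during its response time, $a_{\min,\mathrm{brake}}$ is the minimum braking the rear car then commits to, $a_{\max,\mathrm{brake}}$ is the hardest braking the front car might apply, and $[x]_+ = \max(x, 0)$.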

In practice, the AV needs to know two things:

  • Safe State: This is a state where there is no risk that the AV will cause an accident, even if other vehicles take unpredictable or reckless actions.
  • Default Emergency Policy: This is a concept that defines the most aggressive evasive action that an AV can take to maintain or return to a Safe State.

We coin the term Cautious Command to represent the complete set of commands that maintains a Safe State. RSS sets a hard rule that the AV will never make a command outside of the set of Cautious Commands. This ensures that the planning module itself will never cause an accident.

The paper (PDF) backing this model up tries to establish some baseline equations (the setup pictured above is just one of the situations) for self-driving cars to make sure they will behave safely. That includes in situations where it's possible for pedestrians to appear from behind another car, or, for example, making sure they're not following another car too closely. The point is to make sure that autonomous vehicles don't cause collisions, even if it may not prevent all collisions, and be able to establish blame in a crash that happens without a human driver.

Source: Intel, Arxiv (PDF)



Read the full article here by Engadget

Tuesday, 17 October 2017

Alibaba and Red Hat Partner to Take RHEL-as-a-Service to China's Largest Cloud


Although Red Hat in the cloud is already available in China, the new arrangement is important because it makes the company's software portfolio available on the largest cloud in the largest of the emerging markets. This benefits existing customers with expansion plans that include what is now the world's second largest economy. It also promises to generate revenue from inside the country.




Read the full article here by Linux Today

Pixel 2 Has Google's First In-House Chipset: Pixel Visual Core


The Pixel 2 and Pixel 2 XL had a secret Image Processing Unit (IPU) that none of us knew about until now: Google has just announced that both phones contain its first custom-designed System on Chip (SoC), called the Pixel Visual Core. In the coming weeks and months, Google will enable the Pixel Visual Core to help bring the Pixel camera’s HDR+ magic to third-party camera apps.

The goal here is to bring the Pixel 2’s HDR+ smarts to apps outside of the Google Camera app. The SoC will do so by taking advantage of its eight Google-designed custom cores, which can deliver "over 3 trillion operations per second on a mobile power budget." By using the Pixel Visual Core, Google says that HDR+ can run 5x faster and at less than 1/10th the energy compared with running on the application processor (like the Snapdragon 835 in these phones).




Read the full article here by [H]ardOCP News/Article Feed

Gif: falling...

falling.....



Read the full article here by Likecool

Keyboard latency

If you look at “gaming” keyboards, a lot of them sell for $100 or more on the promise that they’re fast. Ad copy that you’ll see includes:

  • a custom designed keycap that has been made shorter to reduce the time it takes for your actions to register
  • 8x FASTER - Polling Rate of 1000Hz: Response time 0.1 milliseconds
  • Wield the ultimate performance advantage over your opponents with light operation 45g key switches and an actuation 40% faster than standard Cherry MX Red switches
  • World’s Fastest Ultra Polling 1000Hz
  • World’s Fastest Gaming Keyboard, 1000Hz Polling Rate, 0.001 Second Response Time

Despite all of these claims, I can only find one person who’s publicly benchmarked keyboard latency and they only tested two keyboards. In general, my belief is that if someone makes performance claims without benchmarks, the claims probably aren’t true, just like how code that isn’t tested (or otherwise verified) should be assumed broken.

The situation with gaming keyboards reminds me a lot of talking to car salesmen:

Salesman: This car is super safe! It has 12 airbags!
Me: That's nice, but how does it fare in crash tests?
Salesman: 12 airbags!

Sure, gaming keyboards have 1000Hz polling, but so what?

Two obvious questions are:

  1. Does keyboard latency matter?
  2. Are gaming keyboards actually quicker than other keyboards?

Does keyboard latency matter?

A year ago, if you’d asked me if I was going to build a custom setup to measure keyboard latency, I would have said that’s silly, and yet here I am, measuring keyboard latency with a logic analyzer.

It all started because I had this feeling that some old computers feel much more responsive than modern machines. For example, an iMac G4 running Mac OS 9 or an Apple 2 both feel quicker than my 4.2GHz Kaby Lake system. I never trust feelings like this because there’s decades of research showing that users often have feelings that are the literal opposite of reality, so I got a high-speed camera and started measuring actual keypress-to-screen-update latency as well as mouse-move-to-screen-update latency. It turns out the machines that feel quick are actually quick, much quicker than my modern computer – computers from the 70s and 80s commonly have keypress-to-screen-update latencies in the 30ms to 50ms range out of the box, whereas modern computers are often in the 100ms to 200ms range when you press a key in a terminal. It’s possible to get down to the 50ms range in well optimized games with a fancy gaming setup, and there’s one particularly impressive consumer device that can easily get below that. But in general, the default experience is much slower. Modern computers have much better throughput, but their latency isn’t so great.

Anyway, at the time I did these measurements, my 4.2GHz Kaby Lake had the fastest single-threaded performance of any machine you could buy but had worse latency than a quick machine from the 70s (roughly 6x worse than an Apple 2), which seems a bit curious. To figure out where the latency comes from, I started measuring keyboard latency because that’s the first part of the pipeline. My plan was to look at the end-to-end pipeline and start at the beginning, ruling out keyboard latency as a real source of latency. But it turns out keyboard latency is significant! I was surprised to find that the median keyboard I tested has more latency than the entire end-to-end pipeline of the Apple 2. If this doesn’t immediately strike you as absurd, consider that an Apple 2 has 3500 transistors running at 1MHz and an Atmel employee estimates that the core used in a number of high-end keyboards today has 80k transistors running at 16MHz. That’s 20x the transistors running at 16x the clock speed – keyboards are often more powerful than entire computers from the 70s and 80s! And yet, the median keyboard today adds as much latency as the entire end-to-end pipeline of a fast machine from the 70s.

Let’s look at the measured keypress-to-USB latency on some keyboards:

keyboard                 latency (ms)   connection   gaming
apple magic (usb)        15             USB FS
MS natural 4000          20             USB
das 3                    25             USB
logitech k120            30             USB
filco majestouch         30             USB
dell OEM                 30             USB
powerspec OEM            30             USB
razer ornata chroma      35             USB FS       Yes
olkb planck rev 4        40             USB FS
ergodox                  40             USB FS
MS comfort 5000          40             wireless
easterntimes i500        50             USB FS       Yes
kinesis advantage        50             USB FS
genius luxemate i200     55             USB
topre type heaven        55             USB FS
logitech k360            60             “unifying”

The latency measurements are the time from when the key starts moving to the time when the USB packet associated with the key makes it out onto the USB bus. Numbers are rounded to the nearest 5 ms in order to avoid giving a false sense of precision. The easterntimes i500 is also sold as the tomoko MMC023.

The connection column indicates the connection used. USB FS stands for the USB full speed protocol, which allows up to 1000Hz polling, a feature commonly advertised by high-end keyboards. USB is the USB low speed protocol, which is the protocol most keyboards use. The ‘gaming’ column indicates whether or not the keyboard is branded as a gaming keyboard. ‘wireless’ indicates some kind of keyboard-specific dongle, and ‘unifying’ is Logitech’s wireless device standard.

We can see that, even with the limited set of keyboards tested, there can be as much as a 45ms difference in latency between keyboards. Moreover, a modern computer with one of the slower keyboards attached can’t possibly be as responsive as a quick machine from the 70s or 80s because the keyboard alone is slower than the entire response pipeline of some older computers.

That establishes the fact that modern keyboards contribute to the latency bloat we’ve seen over the past forty years. The other half of the question is, does the latency added by a modern keyboard actually make a difference to users? From looking at the table, we can see that among the keyboards tested, we can get up to a 40ms difference in average latency. Is 40ms of latency noticeable? Let’s take a look at the empirical research on how much latency users notice.

There’s a fair amount of empirical evidence on this, and we can see that, for very simple tasks, people can perceive latencies down to 2ms or less. Moreover, increasing latency is not only noticeable to users, it causes users to execute simple tasks less accurately. If you want a visual demonstration of what latency looks like and you don’t have a super-fast old computer lying around, check out this MSR demo on touchscreen latency.

Are gaming keyboards faster than other keyboards?

I’d really like to test more keyboards before making a strong claim, but from the preliminary tests here, it appears that gaming keyboards aren’t generally faster than non-gaming keyboards.

Gaming keyboards often claim to have features that reduce latency, like connecting over USB FS and using 1000Hz polling. The USB low speed spec states that the minimum time between packets is 10ms, or 100 Hz. However, it’s common to see USB devices round this down to the nearest power of two and run at 8ms, or 125Hz. With 8ms polling, the average latency added from having to wait until the next polling interval is 4ms. With 1ms polling, the average latency from USB polling is 0.5ms, giving us a 3.5ms delta. While that might be a significant contribution to latency for a quick keyboard like the Apple magic keyboard, it’s clear that other factors dominate keyboard latency for most keyboards and that the gaming keyboards tested here are so slow that shaving off 3.5ms won’t save them.
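
The arithmetic is easy to check; a quick sketch:

```python
def average_polling_delay_ms(polling_hz):
    """A keypress arrives at a uniformly random point within the polling
    interval, so on average it waits half an interval to be picked up."""
    return (1000.0 / polling_hz) / 2

print(average_polling_delay_ms(125))   # 4.0ms average for 8ms (125Hz) polling
print(average_polling_delay_ms(1000))  # 0.5ms average for 1ms (1000Hz) polling
# delta: 3.5ms, small next to the 15-60ms measured in the table above
```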

Conclusion

Most keyboards add enough latency to make the user experience noticeably worse, and keyboards that advertise speed aren’t necessarily faster. The two gaming keyboards we measured weren’t faster than non-gaming keyboards, and the fastest keyboard measured was a minimalist keyboard from Apple that’s marketed more on design than speed.

Previously, we’ve seen that terminals can add significant latency, up to 100ms in mildly pessimistic conditions. In a future post, we’ll look at the entire end-to-end pipeline to see other places latency has crept in, and we’ll also look at how some modern devices keep latency down.

Appendix: where is the latency coming from?

A major source of latency is key travel time. It’s not a coincidence that the quickest keyboard measured also has the shortest key travel distance by a large margin. The video setup I’m using to measure end-to-end latency is a 240 fps camera, which means that frames are 4ms apart. When videoing “normal” keypresses and typing, it takes 4-8 frames for a key to become fully depressed. Most switches will start firing before the key is fully depressed, but the key travel time is still significant and can easily add 10ms of delay (or more, depending on the switch mechanism). Contrast this to the Apple keyboard measured, where the key travel is so short that it can’t be captured with a 240 fps camera, indicating that the key travel time is < 4ms.

Note that, unlike the other measurement I was able to find online, this measurement was from the start of the keypress instead of the switch activation. This is because, as a human, you don’t activate the switch, you press the key. A measurement that starts from switch activation time misses this large component of latency. If, for example, you’re playing a game and you switch from moving forward to moving backwards when you see something happen, you have to pay the cost of the key movement, which is different for different keyboards. A common response to this is that “real” gamers will preload keys so that they don’t have to pay the key travel cost, but if you go around with a high speed camera and look at how people actually use their keyboards, the fraction of keypresses that are significantly preloaded is basically zero even when you look at gamers. It’s possible you’d see something different if you look at high-level competitive gamers, but even then, just for example, people who use a standard wasd or esdf layout will typically not preload a key when going from back to forward. Also, the idea that it’s fine that keys have a bunch of useless travel because you can pre-depress the key before really pressing the key is just absurd. That’s like saying latency on modern computers is fine because some people build gaming boxes that, when run with unusually well optimized software, get 50ms response time. Normal, non-hardcore-gaming users simply aren’t going to do this. Since that’s the vast majority of the market, even if all “serious” gamers did this, it would still be a rounding error.

The other large sources of latency are scanning the keyboard matrix and debouncing. Neither of these delays is inherent – keyboards use a matrix that has to be scanned, instead of having a wire per key, because it saves a few bucks, and most keyboards scan the matrix at such a slow rate that it induces human-noticeable delays, again because that saves a few bucks. A manufacturer willing to spend a bit more on manufacturing a keyboard could push the delay from matrix scanning far below the threshold of human perception. See below for debouncing delay.

Appendix: counter-arguments to common arguments that latency doesn’t matter

Before writing this up, I read what I could find about latency and it was hard to find non-specialist articles or comment sections that didn’t have at least one of the arguments listed below:

Computers and devices are fast

The most common response to questions about latency is that input latency is basically zero, or so close to zero that it’s a rounding error. For example, two of the top comments on this slashdot post asking about keyboard latency are that keyboards are so fast that keyboard speed doesn’t matter. One person even says

There is not a single modern keyboard that has 50ms latency. You (humans) have that sort of latency.

As far as response times, all you need to do is increase the poll time on the USB stack

As we’ve seen, some devices do have latencies in the 50ms range. This quote as well as other comments in the thread illustrate another common fallacy – that input devices are limited by the speed of the USB polling. While that’s technically possible, most devices are nowhere near being fast enough to be limited by USB polling latency.

Unfortunately, most online explanations of input latency assume that the USB bus is the limiting factor.

Humans can’t notice 100ms or 200ms latency

Here’s a “cognitive neuroscientist who studies visual perception and cognition” who refers to the fact that human reaction time is roughly 200ms, and then throws in a bunch more scientific mumbo jumbo to say that no one could really notice latencies below 100ms. This is a little unusual in that the commenter claims some kind of special authority and uses a lot of terminology, but it’s common to hear people claim that you can’t notice 50ms or 100ms of latency because human reaction time is 200ms. This doesn’t actually make sense because these are independent quantities. This line of argument is like saying that you wouldn’t notice a flight being delayed by an hour because the duration of the flight is six hours.

Another problem with this line of reasoning is that the full pipeline from keypress to screen update is quite long, and if you say that it’s always fine to add 10ms here and 10ms there, you end up with a much larger amount of bloat through the entire pipeline, which is how we got where we are today: you can buy a system with the CPU that gives you the fastest single-threaded performance money can buy and still get 6x the latency of a machine from the 70s.

It doesn’t matter because the game loop runs at 60 Hz

This is fundamentally the same fallacy as above. If you have a delay that’s half the duration of a clock period, there’s a 50% chance the delay will push the event into the next processing step. That’s better than a 100% chance, but it’s not clear to me why people think that you’d need a delay as long as the clock period for the delay to matter. And for reference, the 45ms delta between the slowest and fastest keyboards measured here corresponds to 2.7 frames at 60fps.
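
You can sanity-check that 50% figure with a tiny simulation: if input arrives at a uniformly random phase within a frame, a delay of d pushes it past the frame boundary with probability min(d/T, 1):

```python
import random

def p_pushed_to_next_frame(delay_ms, frame_ms=1000 / 60, trials=100_000):
    """Fraction of uniformly random arrival times within a frame that a
    delay of `delay_ms` pushes past the frame boundary."""
    pushed = sum(random.uniform(0, frame_ms) + delay_ms >= frame_ms
                 for _ in range(trials))
    return pushed / trials

print(p_pushed_to_next_frame(1000 / 120))  # half a frame of delay: ~0.5
print(p_pushed_to_next_frame(45))          # 45ms at 60Hz: always crosses
```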

Keyboards can’t possibly respond more quickly than 5ms/10ms/20ms due to debouncing

Even without going through contortions to optimize the switch mechanism, if you’re willing to put hysteresis into the system, there’s no reason that the keyboard can’t assume a keypress (or release) is happening the moment it sees an edge. This is commonly done for other types of systems, and AFAICT there’s no reason keyboards couldn’t do the same thing (and perhaps some do). The debounce time might limit the repeat rate of the key, but there’s no inherent reason that it has to affect the latency. And if we’re looking at the repeat rate, imagine we have a 5ms limit on the rate of change of the key state due to introducing hysteresis. That gives us one full keypress cycle (press and release) every 10ms, or 100 keypresses per second per key, which is well beyond the capacity of any human. You might argue that this introduces a kind of imprecision, which might matter in some applications (music, rhythm games), but that’s limited by the switch mechanism. Using a debouncing mechanism with hysteresis doesn’t make us any worse off than we were before.
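
For concreteness, here’s a sketch of what edge-triggered debouncing with hysteresis might look like; this is illustrative, not any particular keyboard’s firmware. The first edge is reported immediately, so latency is unaffected and only the maximum repeat rate is limited:

```python
DEBOUNCE_MS = 5  # lockout window: limits repeat rate, not latency

class DebouncedKey:
    def __init__(self):
        self.state = 0            # last reported state (0 = up, 1 = down)
        self.locked_until = 0.0   # end of the current lockout window (ms)

    def on_edge(self, new_state, now_ms):
        """Handle a raw edge from the switch. The first edge is acted on
        immediately (no added latency); bounces within the lockout
        window are ignored."""
        if now_ms >= self.locked_until and new_state != self.state:
            self.state = new_state
            self.locked_until = now_ms + DEBOUNCE_MS
        return self.state
```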

Apologies for not explaining terminology here, but I think that anyone making this objection should understand the explanation :-).

Appendix: experimental setup

The USB measurement setup was a USB cable that was cut open to expose the wires, connected to a logic analyzer. The exact model of logic analyzer doesn’t really matter, but if you’re curious about the details, this set of experiments used a Saleae Pro.

The start-of-input was measured by pressing two keys at once – one key on the keyboard and a button that was also connected to the logic analyzer. This introduces some jitter as the two buttons won’t be pressed at exactly the same time. To calibrate the setup, we used two identical buttons connected to the logic analyzer. The median jitter was < 1ms and the 90%-ile jitter was roughly 5ms. This is enough that tail latency measurements for quick keyboards aren’t really possible with this setup, but average latency measurements like the ones done here seem like they should be ok. The input jitter could probably be reduced to a negligible level by building a device to both trigger the logic analyzer and press a key on the keyboard under test at the same time. Average latency measurements would also get better with such a setup (because it would be easier to run a large number of measurements).
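
The post-processing for that calibration is just order statistics on the per-press deltas between the two channels. A sketch of the kind of analysis involved (assumed; the post doesn’t show its scripts):

```python
import statistics

def jitter_stats(channel_a_ms, channel_b_ms):
    """Summarize how far apart 'simultaneous' presses land on two
    channels, given matched lists of timestamps in milliseconds."""
    deltas = sorted(abs(a - b) for a, b in zip(channel_a_ms, channel_b_ms))
    median = statistics.median(deltas)
    p90 = deltas[int(0.9 * (len(deltas) - 1))]
    return median, p90
```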

If you want to know the exact setup, an E-Switch LL1105AF065Q switch was used. Power and ground were supplied by an Arduino board. There’s no particular reason to use this setup. In fact, it’s a bit absurd to use an entire Arduino to provide power, but this was done with spare parts that were lying around, and this stuff just happened to be what RC had in their lab, with the exception of the switches. There weren’t two identical copies of any switch, so we bought a few switches so we could do calibration measurements with two identical switches. The exact type of switch isn’t important here; any low-resistance switch would do.

Tests were done by pressing the z key, looking for byte 29 on the USB bus, and marking the end of the first packet containing the appropriate information. But, as above, any key would do.

I would love to test more keyboards! If you’re in the NYC area and have a keyboard that I can borrow for a test, please contact me! I’m also looking for more phones and machines to measure and would love to run a quick benchmark with a high-speed camera if you have a machine or device that’s not on this list! If you’re not in the area and want to donate a keyboard for me to test, feel free to mail the keyboard to

Dan Luu
Recurse Center
455 Broadway, 2nd Floor
New York, NY 10013

The setup I currently have can only get measurements from devices that present themselves as standard USB keyboards devices, but I might try to decode keyboards that use other protocols over USB and I might also try to build a setup that can handle ps/2 sometime.

This project was done with help from Wesley Aptekar-Cassels, Leah Hanson, and Kate Murphy. BTW, Wesley is looking for work. In addition to knowing “normal” programming stuff, he’s also familiar with robotics, controls, electronics, and general “low-level” programming.

Thanks to RC, Ahmad Jarara, Raph Levien, Peter Bhat Harkins, Brennan Chesley, Dan Bentley, Kate Murphy, Christian Ternus, and Dan Puttick, for letting us use their keyboards for testing.

Thanks to Leah Hanson, Mark Feeney, and Zach Allaun for comments/corrections/discussion on this post.



Read the full article here by Dan Luu