Sunday, 4 December 2016

Google has developed Deep Learning Algorithm to detect Diabetic Eye Disease

New advances in deep learning are helping to solve many important medical and healthcare problems.

Read the full article here by DailyTech Main News Feed

VM-Neutral Node.js API Unveiled, As NodeSource Collaborates With Microsoft, Mozilla, Intel and IBM

An anonymous reader writes: This week saw the first proof of concept for the Node.js API (or NAPI for short), "making module maintainers' lives easier by defining a stable module API that is independent from changes in [Google's JavaScript engine] V8 and allowing modules to run against newer versions of Node.js without recompilation." The announcement cites both the efforts of the Node.js API working group and of ChakraCore, the core part of the Chakra JavaScript engine that powers Microsoft Edge. There was also a second announcement: the Node.js build system "will start producing nightly node-chakracore builds, enabling Node.js to be used with the ChakraCore JavaScript engine." "These initial efforts are stepping stones to make Node.js VM-neutral, which would allow more opportunities for Node.js in IoT and mobile use cases as well as a variety of different systems." One IBM runtime developer called it "a concrete step toward the strategic end goal of VM neutrality," and the Node.js Foundation believes that the API will ultimately result in "more modules to choose from, and more stability with modules without the need to continually upgrade."




Read the full article here by Slashdot

Saturday, 3 December 2016

Amazon's new Snowmobile collects data instead of blasting through powder

Moving large amounts of electronically stored data is both difficult and time consuming. Current internet technology limits how quickly data can be transferred using the internet. While a few pictures on your phone can be uploaded in seconds, backing up all the data for an entire corporation can take years. This is a problem Amazon Web Services, the cloud-based computing arm of the online retailer, is currently dealing with. They now have a solution: the Snowmobile, literally a massive mobile server.
The Snowmobile is an expansion of Amazon's Snowball service, in which Amazon ships an 80-terabyte Snowball storage unit to customers. Once a customer has loaded their data, they ship the Snowball back to Amazon so the data can be transferred directly to a server. Amazon designed the Snowball to deal with the high network costs, long transfer times, and security concerns that come with moving large amounts of data over the internet.

For some customers, the Snowball simply didn't provide enough space. This led to the development of the Snowmobile, a semi-truck with 100 petabytes (or 100 million gigabytes) of storage that Amazon will use to move digital data directly from a customer to one of its data storage facilities. Essentially, Amazon sends customers a truck-sized hard drive that physically moves all of their stored data. Although it may sound strange to use a truck to move digital data, moving 100 petabytes over fiber internet could take more than 20 years. According to Wired, a Snowmobile can do a full New York to Los Angeles trip in 10 days, including the data transfer time.

Pricing is based on how much data is stored on the Snowmobile per month. Amazon is charging $0.005 per gigabyte per month, which is relatively inexpensive. Do the math and you'll find that a fully loaded truck will set you back $500,000 a month. Amazon clearly believes there will be enough demand to justify its investment in the Snowmobile.
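
A quick back-of-the-envelope check of both figures in Python (the 1 Gbps line rate for "fiber internet" is our assumption; actual link speeds vary):

PB_IN_GB = 10**6                 # gigabytes per petabyte (decimal units)
data_gb = 100 * PB_IN_GB         # a full Snowmobile: 100 million GB

seconds = data_gb * 8 / 1        # GB -> gigabits, transferred at 1 Gbps
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:.0f} years over a 1 Gbps link")   # ~25 years

print(f"${0.005 * data_gb:,.0f} per month")       # $500,000 at $0.005/GB-month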

Amazon Web Services has storage facilities worldwide, with several scattered throughout the United States. This means the turnaround time for a Snowmobile can be days, not years. For Amazon's part, they're taking Snowmobile security and encryption very seriously. The company will provide dedicated security personnel, GPS tracking, alarm monitoring, and 24/7 video surveillance. Customers have the option of a security escort vehicle to accompany the Snowmobile while in transit. The truck itself will be tamper-resistant, water-resistant and temperature controlled.


Via: Wired

Source: Amazon



Read the full article here by Engadget

Death Stranding: Playing Tricks With Players

Hideo Kojima had a very busy Game Awards 2016 ceremony. He received an award as a game development icon and also shared the next trailer for his mysterious game Death Stranding. As with the debut video, you can’t really figure out what’s going on, but there is some interesting stuff to see. But first, have a look for yourself. My god, this is something.

The opening shot shows a bunch of dead crabs, all female and swollen with eggs. We’re evidently standing somewhere near a shoreline. As the shot progresses, though, we see a man walk right out of the imaginary sea. Ahead of him lie a dark tunnel and a ruined city.


Guillermo del Toro is a big friend of Hideo Kojima.

The man turns around to watch the planes flying above him, and we see that it’s Guillermo del Toro – director of some of the spookiest feature films. Attentive users on Reddit noticed that he has a scar on his forehead; it seems someone was doing something to his brain. He’s wearing a suit and an unfamiliar badge with the name Bridges and a web spreading across the US (it actually says United Cities of America).


Del Toro is actually not mentioned in the credits of the trailer; Norman Reedus is listed instead. This led some fans to believe that the trailer takes place in an alternate universe where del Toro is Reedus. Whooooah.


The man hides inside the dark passage under the bridge, and then some strange stuff starts to happen. We see a huge tank (an M36 tank destroyer, an American vehicle used in 1944 and 1945) infested with some strange organic substance. The tank looks like it was made by zombies: there are pieces of gore on its armor plates, and bones substitute for ladder rungs.


Then we see zombies that, by my theory at least, were once US Marines. They are marching somewhere while black goo pours down through the cracks of the bridge. Del Toro is obviously scared.


The water is rising, and he suddenly plugs a hose into the small container he’s carrying. Inside the container we see a child. Then we see Mads Mikkelsen’s character.


Mads Mikkelsen is like the perfect bad guy.


Design-wise, it’s absolute genius. You can’t figure out what’s going on, but there are already tons of comments on Reddit, and it’s really interesting to check out all those theories. What’s more interesting to us is that Death Stranding’s technical producer is Mark Cerny, who served as the lead architect of the PlayStation 4 and the director of Knack (although we do wish we could forget that game). This means that Kojima has access to some of the most powerful technology at Sony Computer Entertainment.


Yesterday Michael Förtsch (a German journalist who works with Zeit, Spiegel and other credible papers) claimed that he knows the name of Death Stranding’s engine. It’s supposedly called Decima! The engine is a modified version of the technology Sucker Punch used to build Infamous Second Son. Second Son is one of the most beautiful games on the platform, so we’re pretty sure that Kojima’s next game is going to have a lot of open spaces, urban landscapes, amazing action and nanomachines. This is just our forecast. After all, just like Jon Snow, we know absolutely nothing.


Death Stranding will be available for PS4 and PC.

 





Read the full article here by 80lvl

Abacus Drive is a Speed-to-Torque Game-Changer

Thursday, 1 December 2016

WD introduces a range of affordable storage devices designed for Raspberry Pi

The Raspberry Pi has been an incredible success. Originally aimed at teaching kids to code, it has found an appreciative audience outside the education system. You can do a lot with a Pi, but some projects require more storage than you might have on your SD/microSD card. WD has introduced a new series of storage devices designed specifically for use with the Raspberry Pi, which combine microSD card and USB drive functionality with operating system installation software. The WD PiDrive Foundation Edition drives include a… [Continue Reading]


Read the full article here by Betanews

Running Intel TBB On a Raspberry Pi

The usefulness of Raspberry Pis seems almost limitless, with new applications being introduced daily and with no end in sight. But, as versatile as they are, it’s no secret that Raspberry Pis are still lacking in pure processing power. So, some serious optimization is needed to squeeze as much power out of the Raspberry Pi as possible when you’re working on processor-intensive projects.

The simplest way to accomplish this optimization, of course, is to reduce what’s running down to the essentials. For example, there’s no sense in running a GUI if your project doesn’t even use a display. Another strategy, however, is to ensure that you’re actually using all of the processing power the Raspberry Pi offers. In [sagiz’s] case, that meant using Intel’s open source Threading Building Blocks (TBB) to achieve better parallelism in his OpenCV project.

As you’re probably guessing, this wasn’t as easy as just typing “apt-get install tbb” into the terminal. That’s because Intel TBB wasn’t available in Raspbian, due to the difficulty of creating a build that runs on ARM. But [sagiz] was able to create a working build and has made it available on his project page. Using his new build, he was able to increase OpenCV speed by 30%, which is definitely a non-trivial amount!
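
If you want to check whether your own OpenCV build picked up a parallel framework such as TBB, the Python bindings expose enough to find out. A quick sketch (the exact build-info wording varies by OpenCV version):

import cv2

# Report the parallel framework (TBB, OpenMP, pthreads, ...) this build uses.
for line in cv2.getBuildInformation().splitlines():
    if "Parallel framework" in line:
        print(line.strip())

cv2.setNumThreads(4)  # cap workers at the Pi's four cores
print("OpenCV threads:", cv2.getNumThreads())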

If you’re looking to get started with OpenCV on the Raspberry Pi, be sure to check out this guide, which will get you off to a grand start.



Read the full article here by Hack a Day

Geohot’s comma.ai Self-Driving Code On GitHub

Wednesday, 30 November 2016

Docker + Dataflow = happier workflows

When I first saw the Google Cloud Dataflow monitoring UI -- with its visual flow execution graph that updates as your job runs, and convenient links to the log messages -- the idea came to me. What if I could take that UI, and use it for something it was never built for? Could it be connected with open source projects aimed at promoting reproducible scientific analysis, like Common Workflow Language (CWL) or Workflow Definition Language (WDL)?
Screenshot of a Dockerflow workflow for DNA sequence analysis.

In scientific computing, it’s really common to submit jobs to a local high-performance computing (HPC) cluster. There are tools to do that in the cloud, like Elasticluster and Starcluster. They replicate the local way of doing things, which means they require a bunch of infrastructure setup and management that the university IT department would otherwise do. Even after you’re set up, you still have to ssh into the cluster to do anything. And then there are a million different choices for workflow managers, each unsatisfactory in its own special way.

By day, I’m a product manager. I hadn’t done any serious coding in a few years. But I figured it shouldn’t be that hard to create a proof-of-concept, just to show that the Apache Beam API that Dataflow implements can be used for running scientific workflows. Now, Dataflow was created for a different purpose, namely, to support scalable data-parallel processing, like transforming giant data sets, or computing summary statistics, or indexing web pages. To use Dataflow for scientific workflows would require wrapping up shell steps that launch VMs, run some code, and shuttle data back and forth from an object store. It should be easy, right?

It wasn’t so bad. Over the weekend, I downloaded the Dataflow SDK, ran the wordcount examples, and started modifying. I had a “Hello, world” proof-of-concept in a day.

To really run scientific workflows would require more, of course. Varying VM shapes, a way to pass parameters from one step to the next, graph definition, scattering and gathering, retries. So I shifted into prototyping mode.

I created a new GitHub project called Dockerflow. With Dockerflow, workflows can be defined in YAML files. They can also be written in pretty compact Java code. You can run a batch of workflows at once by providing a CSV file with one row per workflow to define the parameters.

Dataflow and Docker complement each other nicely:

  • Dataflow provides a fully managed service with a nice monitoring interface, retries, graph optimization and other niceties.
  • Docker provides portability of the tools themselves, and there's a large library of packaged tools already available as Docker images.

While Dockerflow supports a simple YAML workflow definition, a similar approach could be taken to implement a runner for one of the open standards like CWL or WDL.

To get a sense of working with Dockerflow, here’s “Hello, World” written in YAML:

defn:
  name: HelloWorkflow
steps:
- defn:
    name: Hello
    inputParameters:
      name: message
      defaultValue: Hello, World!
  docker:
    imageName: ubuntu
    cmd: echo $message

And here’s the same example written in Java:

public class HelloWorkflow implements WorkflowDefn {
  @Override
  public Workflow createWorkflow(String[] args) throws IOException {
    Task hello = TaskBuilder.named("Hello")
        .input("message", "Hello, World!")
        .docker("ubuntu")
        .script("echo $message")
        .build();
    return TaskBuilder.named("HelloWorkflow").steps(hello).args(args).build();
  }
}

Dockerflow is just a prototype at this stage, though it can run real workflows and includes many nice features, like dry runs, resuming failed runs from mid-workflow, and, of course, the nice UI. It uses Cloud Dataflow in a way that was never intended -- to run scientific batch workflows rather than large-scale data-parallel workloads. I wish I’d written it in Python rather than Java. The Dataflow Python SDK wasn’t quite as mature when I started.

Which is all to say, it’s been a great 20% project, and the future really depends on whether it solves a problem people have, and if others are interested in improving on it. We welcome your contributions and comments! How do you run and monitor scientific workflows today?

By Jonathan Bingham, Google Genomics and Verily Life Sciences


Read the full article here by Google Open Source Blog

Tuesday, 29 November 2016

Decades after Chernobyl disaster, engineers slide high-tech shelter over reactor

You can now play Pac-Man and more arcade faves in Facebook Messenger

Another way for social media to distract you

Pac-Man, Space Invaders, Galaga and other arcade classics are now playable on another platform: Facebook. The social media giant has introduced Instant Games, a new gaming feature, to Messenger and the news feed, making popular titles easily accessible while visiting Facebook.

Instant Games is now in closed beta, with a full launch date not yet specified. Players can launch the feature by tapping an icon in the messaging field, as well as while scrolling through the news feed. The feature will work on mobile and web, and will include social features without any added installations. Watch the how-to video above for a sense of how Instant Games will work once it’s live for everyone.

The full list of 17 games included in the Instant Games launch library features those familiar to heavy Facebook users, like Shuffle Cats and Words with Friends. More exciting are the old-school favorites, which run the gamut from Bandai Namco’s Pac-Man, to Konami’s Track & Field 100M, to Taito Corporation’s Puzzle Bobble. Here’s what’s playable starting today:

  • PAC-MAN (BANDAI NAMCO Entertainment Inc.)
  • Galaga (BANDAI NAMCO Entertainment Inc.)
  • ARKANOID (TAITO CORPORATION)
  • SPACE INVADERS (TAITO CORPORATION)
  • TRACK & FIELD 100M (Konami Digital Entertainment Co., Ltd.)
  • Words with Friends: Frenzy (Zynga)
  • Shuffle Cats Mini (King)
  • EverWing (Blackstorm)
  • Hex (FRVR)
  • Endless Lake (Spilgames)
  • Templar 2048 (Vonvon)
  • The Tribez: Puzzle Rush (Game Insight)
  • 2020 Connect (Softgames)
  • Puzzle Bobble (TAITO CORPORATION / Blackstorm)
  • Zookeeper (Kiteretsu)
  • Brick Pop (Gamee)
  • Wordalot Express (MAG Interactive)

Facebook also has an array of games available as separate apps within the platform. While many of these remain popular, the Facebook game craze reached an apex in the late-00s, thanks to free-to-play titles like FarmVille.

In 2013, Dan Morris, Facebook’s head of gaming partnerships, told us that the social media company was looking into ways to keep players gaming on the site as downloadable, off-platform mobile games grew in popularity. Mobile gaming is now a huge industry, and Instant Games seems to be the realization of Facebook’s years of trying to break its way inside.



Read the full article here by Polygon

Sunday, 27 November 2016

F1 | Ricciardo: “Frustrated with the strategy”

Only a fifth place in the final race of the year for Daniel Ricciardo, who thus does not see fulfilled his hopes of fighting...

Read the full article here by FormulaPassion.it

Saturday, 26 November 2016

Unix history repository, now on GitHub


The history and evolution of the Unix operating system is made available as a revision-management repository, covering the period from its inception in 1970 as a 2.5-thousand-line kernel with 26 commands, to 2016 as a widely used 27-million-line system. The 1.1 GB repository contains about half a million commits and more than two thousand merges. It uses Git for storage and is hosted on GitHub. It was created by synthesizing, with custom software, 24 snapshots of systems developed at Bell Labs, the University of California at Berkeley, and the 386BSD team, two legacy repositories, and the modern repository of the open source FreeBSD system. In total, about one thousand individual contributors are identified, the early ones through primary research. The data set can be used for empirical research in software engineering, information systems, and software archaeology.

The project aims to put in the repository as much metadata as possible, allowing the automated analysis of Unix history.
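
As a small example of the automated analysis this enables, here's a sketch that counts commits per decade (it assumes a local clone of the repository, which is hosted at github.com/dspinellis/unix-history-repo):

import collections
import subprocess

# One author-date year per commit, across all synthesized branches.
years = subprocess.run(
    ["git", "log", "--all", "--format=%ad", "--date=format:%Y"],
    capture_output=True, text=True, check=True,
).stdout.split()

by_decade = collections.Counter(y[:3] + "0s" for y in years)
for decade in sorted(by_decade):
    print(decade, by_decade[decade])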



Read the full article here by OSNews

Slow Down to Go Faster

Thursday, 24 November 2016

Unity details Xiaomi partnership


Unity has offered up further information on its deal with Chinese tech outfit Xiaomi that it says can offer developers "unparalleled access and distribution" to China's mobile market. Starting next year, developers will be able to use Unity to publish their games to Xiaomi's app store.

The Chinese firm will also be on hand to offer developers help when negotiating local licensing terms, a process which is viewed as a major obstacle for games companies based outside of China that are trying to break into the country.

The Xiaomi app store is a lucrative enough target for developers, though: it has seen 50 billion app downloads to date, making access an attractive proposition for many developers, although Unity hasn't given an exact timeframe beyond "2017" for exports to Xiaomi to kick off.

Unity will also allow developers to enable Unity Ads in their titles, notable as the first time a third-party ad network has been allowed on any Chinese Android app store. Developers will also be able to bring their IAPs to the app store with Unity IAP.

“A large part of ensuring developer success is providing access to the platforms that matter most,” said Andrew Tang, China Country Manager at Unity Technologies.

“By partnering with Xiaomi we can help Unity developers bring their creative vision to a massive audience and accelerate the progress of quality entertainment worldwide.”

 

 
 


Read the full article here by Develop Feed

x86 emulation rumored to be coming to Windows for ARM in late 2017

Microsoft is working on an emulator enabling systems with ARM processors to run x86 applications, according to sources speaking to Mary Jo Foley, and the capability will ship in the update codenamed "Redstone 3," currently due for fall 2017. This will be the third Minecraft-inspired Redstone codename; this year's Anniversary Update was Redstone 1, and the Creators Update coming in spring next year is Redstone 2.

Ever since Microsoft announced Windows on ARM in 2012, there's been an immediate problem that prevents the port of the operating system from having mainstream appeal: it doesn't run Windows applications, because almost all Windows applications are compiled for x86 processors.

This isn't such a big deal for Windows on phones because phone applications have to be purpose-built to include a phone user interface, but it was one of the things that made Windows RT tablets, including Microsoft's own Surface, broadly undesirable. And even while it isn't an issue for phone apps per se, it limits Microsoft's ambitions somewhat with Windows Mobile's Continuum feature. With Continuum, a Windows Mobile phone can connect to a keyboard, mouse, and screen, and the phone can run desktop-style applications. Currently, Continuum is limited to running UWP applications; these apps can offer dual user interfaces, adapting to whether being used in phone mode or Continuum mode. It would be logical and obvious to extend this to allow true Windows desktop applications to run in Continuum mode—but that raises the x86/ARM incompatibility issue once more.




Read the full article here by Ars Technica

Microsoft's x86 on ARM64 Emulation: A Windows 10 Redstone 3 Fall 2017 Feature

Mary Jo Foley, reporting for ZDNet:Since January 2016 (and maybe before), there's been talk that Microsoft was working on bringing x86 emulation to ARM processors. Sources of mine are now saying that this capability is coming to Windows 10, though not until "Redstone 3" in the Fall of 2017. Here's why this matters: Microsoft officials continue to claim that Continuum -- the capability that will allow Windows 10 Mobile devices to connect to external displays and keyboards -- is going to be a key for the company, its partners and its customers. There's been one very big limitation to Continuum so far, however: It only allows users to run Universal Windows Platform (UWP), and not full-fledged x86 apps. What if an ARM64-based device could run x86 apps via emulation, the same way that the WOW (Windows on Windows) emulator allowed 32-bit apps to run on 64-bit Windows? That would make Windows 10 Mobile, which as of now, continues to support ARM only, and Continuum a lot more interesting, especially to business users who need certain Win32/line-of-business apps.




Read the full article here by Slashdot

Wednesday, 23 November 2016

Playing FPS games with deep reinforcement learning

Playing FPS games with deep reinforcement learning – Lample et al., arXiv preprint, 2016

When I wrote up ‘Asynchronous methods for deep learning’ last month, I made a throwaway remark that after Go the next challenge for deep learning systems would be to win an esports competition against the best human teams. Can you imagine the theatre!

Source: ‘League of Legends’ video game championship is like the World Cup, Super Bowl combined – Fortune: http://ift.tt/1O9T3HX

Since those are team competitions, it would need to be a team of collaborating software agents playing against human teams. Which would make for some very cool AI technology.

Today’s paper isn’t quite at that level yet, but it does show that progress is already being made on playing first-person shooter (FPS) games in 3D environments.

In this paper, we tackle the task of playing an FPS game in a 3D environment. This task is much more challenging than playing most Atari games as it involves a wide variety of skills, such as navigating through a map, collecting items, recognizing and fighting enemies, etc. Furthermore, states are partially observable, and the agent navigates a 3D environment in a first-person perspective, which makes the task more suitable for real-world robotics applications.

Lample and Chaplot develop an AI agent for playing deathmatches. I’m not really an FPS kind of person and had no idea what a deathmatch was. It turns out to be a scenario in which the objective is to maximize the number of kills by a player/agent. Nice. The agent uses separate neural networks for navigation tasks and for action tasks. Experimentation is done using the VizDoom framework for developing AI bots that play Doom. It turns out there’s even been a recent VizDoom competition, with the ‘full deathmatch’ category won by a team from Intel Labs. Here’s a video of their entry in action:

Deep Recurrent Q-Networks

The core of the system is built on a DRQN (Deep Recurrent Q-Network) architecture. A regular deep Q-network, such as the one used to play Atari games, receives a full (or very nearly full) observation of the environment at each step. But in a game like Doom, where the agent’s field of view is limited to 90 degrees centred around its position, the agent receives only a partial observation.

In 2015, Hausknecht and Stone introduced Deep Recurrent Q-Networks, which add an extra parameter at each step representing the hidden state of the agent. This can be accomplished by layering a recurrent neural network such as an LSTM on top of a normal DQN.
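
Here's a minimal PyTorch sketch of that idea: a small CNN encodes each frame, an LSTM carries the hidden state across timesteps, and a linear head maps the recurrent state to per-action Q-values. The layer sizes and input shapes are illustrative assumptions, not the exact configuration from the paper.

import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, num_actions, hidden=512):
        super().__init__()
        # Convolutional encoder for a single (partially observed) frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed feature size for any frame size
            nn.Flatten(),
        )
        # The LSTM's hidden state is the "extra parameter": it carries
        # information about parts of the map the agent can no longer see.
        self.lstm = nn.LSTM(64 * 4 * 4, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, num_actions)

    def forward(self, frames, state=None):
        # frames: (batch, time, channels, height, width)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, state = self.lstm(feats, state)
        return self.q_head(out), state  # per-timestep Q-values + hidden state

# Example: Q-values for one 5-frame sequence of 60x108 RGB frames.
q_values, hidden = DRQN(num_actions=8)(torch.zeros(1, 5, 3, 60, 108))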

Two models

In a deathmatch, you need to explore the map to collect items and find enemies, and then you need to fight enemies when you find them. Lample and Chaplot use two networks, one for navigation, and one for action. The current phase of the game (and hence which model to use at any given time) is determined by predicting whether or not an enemy is visible in the current frame (action model if so, navigation model otherwise).

There are various advantages of splitting the task into two phases and training a different network for each phase. First, this makes the architecture modular and allows different models to be trained and tested independently… Furthermore, the navigation phase only requires three actions (move forward, turn left, and turn right), which dramatically reduces the number of state-action pairs required to learn the Q-function and makes training much faster. More importantly, using two networks also mitigates ‘camper’ behaviour, i.e. the tendency to stay in one area of the map and wait for enemies, which was exhibited by the agent when we tried to train a single DQN or DRQN for the deathmatch task.
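
The dispatch logic itself is tiny. A sketch (the names are hypothetical; the enemy-presence prediction comes from the game-feature detection discussed below):

def act(frame, action_net, nav_net, enemy_prob, threshold=0.5):
    # Route the current frame to the action network when an enemy is
    # predicted to be visible, otherwise to the navigation network.
    if enemy_prob(frame) > threshold:
        return action_net(frame)   # fight
    return nav_net(frame)          # explore the map, collect items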

Training

When trained using a vanilla DRQN approach, agents tended either to fire at will, hoping for enemies to wander into their crossfire, or not fire at all when given a penalty for using ammunition. This is because the agent could not effectively learn to detect enemies. To address this, the team gave the agent additional information that it could use during training (but not during actual gameplay or testing). At each training step, in addition to receiving a video frame, the agent received a boolean value for each entity (enemy, health pack, weapon, ammo and so on) indicating whether or not it appeared in the frame.

We modified the DRQN architecture to incorporate this information and to make it sensitive to game features. In the initial model, the output of the CNN is given to a LSTM that predicts a score for each action based on the current frame and its hidden state. We added two fully-connected layers of size 512, and k-connected to the output of the CNN, where k is the number of game features we want to detect… Although a lot of game information was available, we only used an indicator about the presence of enemies on the current frame.

Jointly training the DRQN model and the game feature detection allows the kernels of the convolutional layers to capture relevant information about the game with only a few hours of training needed to reach an optimal enemy detection accuracy of 90%.
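
Continuing the sketch above, the joint objective might look like this. Hedged: the exact wiring of the 512-unit layers and the weighting between the two loss terms are our reading of the excerpt, not stated verbatim in the paper.

import torch.nn as nn
import torch.nn.functional as F

# Game-feature head attached to the CNN output of the DRQN sketch above:
# two fully-connected layers of size 512, then k logits (k = 1 here,
# indicating whether an enemy is visible in the current frame).
feature_head = nn.Sequential(
    nn.Linear(64 * 4 * 4, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 1),
)

def joint_loss(q_pred, q_target, cnn_feats, enemy_present):
    td_loss = F.mse_loss(q_pred, q_target)             # usual Q-learning term
    detect_loss = F.binary_cross_entropy_with_logits(  # supervised feature term
        feature_head(cnn_feats).squeeze(-1), enemy_present)
    return td_loss + detect_loss  # equal weighting is an assumption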

The reward function for the action network includes:

  • positive rewards for kills
  • negative rewards for suicides
  • positive rewards for picking up objects
  • negative rewards for losing health
  • negative rewards for shooting or losing ammo

The navigation network was simply given a positive reward for picking up an item, and a negative reward for walking on lava.
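
Expressed as code, the shaping for both networks might look like this (only the signs are taken from the paper; the magnitudes are placeholder assumptions):

# Illustrative reward shaping; magnitudes are placeholders, signs per the paper.
def action_reward(ev):
    return (1.0 * ev["kills"]
            - 1.0 * ev["suicides"]
            + 0.1 * ev["objects_picked_up"]
            - 0.1 * ev["health_lost"]
            - 0.05 * ev["ammo_used"])

def navigation_reward(ev):
    return 0.1 * ev["objects_picked_up"] - 1.0 * ev["steps_on_lava"]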

A frame skip of 4 turned out to give the best overall balance between training speed and performance (the agent receives a screen input every 4+1 frames, and the action decided by the network is repeated over all the skipped frames). During back-propagation, only action states with enough history to give a reasonable estimate are updated.
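
In code, the frame skip is just a loop that repeats the chosen action and accumulates reward, sketched here against a generic environment interface:

def step_with_frameskip(env, action, skip=4):
    # The network chooses an action on one frame; that action is then
    # repeated over the `skip` skipped frames, accumulating the reward.
    total_reward, done = 0.0, False
    for _ in range(skip + 1):
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return obs, total_reward, done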

Fighting! (evaluation)

Evaluation is done using the delightful kill-to-death ratio (K/D) as the scoring metric. Table 2 of the paper shows how well the agent performed both on known maps (limited deathmatch) and on unknown maps (full deathmatch).

You can watch the agent play in these videos.

Here’s how it stacks up against human opposition:

The authors conclude:

In this paper, we have presented a complete architecture for playing deathmatch scenarios in FPS games. We introduced a method to augment a DRQN model with high-level game information, and modularized our architecture to incorporate independent networks responsible for different phases of the game. These methods lead to dramatic improvements over the standard DRQN model when applied to complicated tasks like a deathmatch. We showed that the proposed model is able to outperform built-in bots as well as human players and demonstrated the generalizability of our model to unknown maps. Moreover, our methods are complementary to recent improvements in DQN, and could easily be combined with dueling architectures (Wang, de Freitas, and Lanctot 2015), and prioritized replay (Schaul et al. 2015).



Read the full article here by the morning paper