Wednesday, 14 June 2017

Greatly enhanced Google Backup and Sync to (mostly) replace Drive

Later this month -- June 28 specifically -- Google will launch a new tool: Backup and Sync. Aimed primarily at consumers, the tool is designed to back up files and photos and make them accessible from any device. Backup and Sync is described as "the latest version of Google Drive for Mac/PC, which is now integrated with the Google Photos desktop uploader," and for most users it will completely replace Drive. Corporate and enterprise users are advised to hold out until the release of Drive File Stream later in the year, but for everyone else, Backup and Sync opens up… [Continue Reading]


Read the full article here by Betanews

DIY Raspberry Neural Network Sees All, Recognizes Some

As a fun project I thought I’d put Google’s Inception-v3 neural network on a Raspberry Pi to see how well it does at recognizing objects first hand. It turned out to be not only fun to implement, but also the way I’d implemented it ended up making for loads of fun for everyone I showed it to, mostly folks at hackerspaces and such gatherings. And yes, some of it bordering on pornographic — cheeky hackers.

An added bonus many pointed out is that, once installed, no internet access is required. This is state-of-the-art, standalone object recognition with no big brother knowing what you’ve been up to, unlike with that nosey Alexa.

But will it lead to widespread useful AI? If a neural network can recognize every object around it, will that lead to human-like skills? Read on.

How To Do Object Recognition

Inception object recognizer internals

The implementation consists of:

  • Raspberry Pi 3 Model B
  • amplifier and speaker
  • PiCamera
  • momentary switch
  • cellphone charger battery for the Pi

The heart of the necessary software is Google's Inception neural network, which is implemented using their TensorFlow framework. You can download it by following the TensorFlow tutorial for image recognition. The tutorial doesn't involve any programming, so don't worry if you don't know Python or TensorFlow. That is, unless you're going to modify their sample code as I did.

 

classify_image.py printing that it saw a panda

The sample code takes a file with a fixed name containing a picture of a panda and does object recognition on it. It gives the result by printing out that it saw a panda. But that wasn't enough fun.

I hunted around for some text-to-speech software and found Festival. I modified the sample code to run Festival in a Linux shell, so that when it recognizes a panda it actually says "I saw a panda" through the speaker.

But that still wasn’t fun enough. I connected a PiCamera to the Raspberry Pi, and had that take a photo and give it to the TensorFlow code to do object recognition. In the vernacular, it now ran inference on my photo.

And lastly, to make it all really easy, I connected a momentary switch to one of the Pi's GPIO pins and took the photo when the switch was pressed.

Here’s the Python program’s main() function before…

def main(_):
  maybe_download_and_extract()
  image = (FLAGS.image_file if FLAGS.image_file else
           os.path.join(FLAGS.model_dir, 'cropped_panda.jpg'))
  run_inference_on_image(image)

… and after.

import os
import time

import RPi.GPIO as GPIO       # new imports needed at the top of classify_image.py
from picamera import PiCamera

def main(_):
  os.system("echo %s | festival --tts" % "Wait while I prepare my brain...")

  maybe_download_and_extract()
  # Creates graph from saved GraphDef.
  create_graph()

  # preparing for the switch
  GPIO.setmode(GPIO.BCM)
  GPIO.setup(17, GPIO.IN)

  camera = PiCamera()

  os.system("echo %s | festival --tts" % "I am ready to see things.")

  while True:
    # loop for the switch
    while (GPIO.input(17) == GPIO.LOW):
      time.sleep(0.01)

    # take and write a snapshot to a file
    image = os.path.join(FLAGS.model_dir, 'seeing_eye_image.jpg')
    camera.capture(image)

    os.system("echo %s | festival --tts" % "I am thinking about what you showed me...")
    human_string = run_inference_on_image(image)
    os.system("echo I saw a %s | festival --tts" % human_string)

The calls to os.system() are where I run the Festival text-to-speech program to make it say something to the speaker.
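Piping the text through echo works, but it will break if a label ever contains shell metacharacters (an apostrophe in "potter's wheel", for instance). A safer variant (my sketch, not the article's code) hands the text to Festival on stdin via subprocess, so no shell parsing is involved:

```python
import subprocess

def festival_command():
    # festival reads the text to speak from stdin when given --tts
    return ["festival", "--tts"]

def say(text):
    """Speak text aloud; assumes the festival package is installed on the Pi."""
    subprocess.run(festival_command(), input=text.encode("utf-8"))
```

With this in place, `say("I saw a coffee mug")` plays through whatever audio output the Pi is configured to use, and labels with quotes or backticks in them can't confuse a shell.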

maybe_download_and_extract() is where Google’s Inception neural network would be downloaded from the Internet, if it’s not already present. By default, it downloads it to /tmp/imagenet which is on a RAM disk. The first time it did this, I copied it from /tmp/imagenet to /home/inception on the SD card and now run the program using a command line that includes where to find the Inception network.
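That copy step can be scripted. The sketch below uses the paths mentioned above, but the helper name and its parameterization are mine:

```python
import os
import shutil

RAM_DISK_DIR = "/tmp/imagenet"   # default download location (a RAM disk on the Pi)
SD_CARD_DIR = "/home/inception"  # persistent location on the SD card

def cache_model(src=RAM_DISK_DIR, dst=SD_CARD_DIR):
    """Copy the downloaded Inception files to persistent storage, once.

    Afterwards, run classify_image.py with --model_dir pointing at dst so
    the network isn't re-downloaded after every reboot.
    """
    if not os.path.isdir(dst):
        shutil.copytree(src, dst)
```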

Running the inception object recognizer

The call to create_graph() was moved out of the run_inference_on_image() function. create_graph() sets up the neural network, which you need to do only once (see our introduction to TensorFlow for more about graphs). Previously the program was a one-shot deal, but now it has an infinite while loop that calls run_inference_on_image() each time through, so the setup had to be moved above the loop.

The run_inference_on_image() function is where the image is given to the neural network to do the object recognition. It used to just print out whatever it thought was in the image, but I modified it to instead return the text string containing what it thinks the object is, “coffee mug” for example. So the last line is where it would say “I saw a coffee mug” to the amplifier and speaker.
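The change amounts to replacing a print with a return at the end of run_inference_on_image(). A simplified sketch of the idea (the variable names echo classify_image.py, but this body is mine, not Google's code):

```python
def best_label(predictions, node_lookup):
    """Return the human-readable name of the highest-scoring class.

    predictions: per-class scores from the softmax layer.
    node_lookup: maps a class index to a human-readable string.
    """
    best = max(range(len(predictions)), key=lambda i: predictions[i])
    # Return the label to the caller instead of printing it,
    # so main() can hand it to Festival.
    return node_lookup[best]
```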

Boxing all that up gave me a small, standalone package that could be carried around and tried out by anyone. Here’s a video of it in action.

An improvement would be to add a small screen so that the user could see what the camera sees, but the PiCamera has a wide viewing angle and a screen turns out not to be necessary.

How Good Is Its Object Recognition

Inception seeing a tobacconist

Showing it a cell phone often results in it saying it saw a cell phone, but sometimes an iPod. However, so far it has gotten water bottles and coffee mugs correct every time.

However, it doesn’t do well with people. Pointing it at me in my office causes it to say it saw a “tobacco shop, tobacconist shop, tobacconist”, probably due to the shelves of equipment and parts directly behind me. Standing against a blank wall, it said it saw a sweatshirt; with that removed, a tee shirt; and with that removed too, “bathing trunks, swim trunks”, despite seeing only my bare upper torso and head. (I’ll spare you the photo.)

The neural network is trained on a dataset called ImageNet, the version from the Large Visual Recognition Challenge of 2012. That dataset consists of a huge collection of images divided up into 1000 classes, each class containing images of a particular object. As you can see from this small sample from the cell phone class, some of the phone images are a little dated. However, objects such as coffee mugs don’t change over time.

But that didn’t stop everyone who played with it from having fun, walking around testing it on everything in sight, like finding a magic wand for the first time and waving it around to see what it could conjure.

Is That The Best You Can Do?

Well, first off, each recognition takes around 10 seconds on a Raspberry Pi 3 so either that has to be sped up or a faster processor used, preferably one with a CUDA-enabled Nvidia GPU since that’s the only type of GPU TensorFlow currently supports.

The Inception neural net is only as good as the data it’s trained on. The flaws I pointed out above regarding recognizing cell phones and people are issues with the ImageNet dataset. All five of its best guesses are wrong only 3.46% of the time, whereas humans doing the same test are wrong in their five best guesses 5% of the time. Not bad.
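Those percentages are top-5 error rates: an image counts as correctly classified if the true label appears anywhere among the model's five best guesses. A toy scorer for that metric (illustrative only, not part of the project code):

```python
def top5_error(true_labels, guesses_per_image):
    """Fraction of images whose true label is absent from all five guesses."""
    misses = sum(1 for truth, top5 in zip(true_labels, guesses_per_image)
                 if truth not in top5)
    return misses / len(true_labels)
```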

As we pointed out in our article about the freaky stuff neural networks do today, Long Short Term Memory (LSTM) neural networks can examine what they see in a single frame of a video while taking into account what came before in the video. For example, an LSTM has more confidence that it saw a beach ball instead of a basketball if the preceding scene was a beach party. That differs from the Inception neural network, which has only the image you show it to go on.

Where Does This Get Us?

Will improved object recognition lead to widespread useful AI with human-like skills? The evolution of the eye is often cited as a major cause of the explosion in lifeforms known as the Cambrian explosion, around 541 million years ago, though there is much debate about whether it was the cause.

When those eyes evolved, however, there was already some form of brain to use them. That brain already handled the senses of touch, vibration and smell. So improved object recognition alone wouldn’t cause a revolution. For human-like skills our AIs would need more intelligence. We currently have only bits and pieces of ideas of what we need for that.

What many agree on is that our AI would need to make predictions so that it could plan. For that it could have an internal model, or understanding, of the world to use as a basis for those predictions. For the human skill of applying a soldering tip to a wire, an internal model would predict what would happen when the tip made contact and then plan based on that. When the tip contacts the wire, if things don’t go as predicted then the AI would react.

Recent work from Facebook with Generative Adversarial Networks (GANs) may hint at a starting point here that contains such a model and predictive capability (if you’re not familiar with GANs, we again refer you to our article about the freaky stuff neural networks do today). The “generative” part of the name means that they generate images. But more specifically, these are deep convolutional GANs, meaning that they contain an understanding of what they’ve seen in the images they’ve been trained on. For example, they know about windows, doors and TVs and where they go in rooms.

AGDL video predictions

What about making predictions? More work from Facebook involves video generation. Using Adversarial Gradient Difference Loss Predictors (AGDL), they predict what the next two frames of a video should be. In the photo of a billiards game you can see the ground truth, i.e. what really happened, and what the AGDL network predicted. It’s not very far into the future, but it’s a start.

Those are at least small steps on the path from a naive object recognizer to one with human-like skills.

In Closing

Where might you have seen the Inception neural network recognizing objects before? We’ve covered [Lukas Biewald] using it on an RC car to recognize objects in his garage/workshop.

While this turned out to be fun for everyone to use as is, what other uses can you think of for it? What useful application can you think of? What can be added? Let us know in the comments below.



Read the full article here by Hack a Day

Tuesday, 13 June 2017

A video game trailer has been nominated for an Academy Award

Despite being one of the biggest entertainment mediums on the planet, it's rare to see video games getting the mainstream recognition that they deserve. Now, it looks like that's beginning to change, as, for the first time in history, a video game trailer has been nominated for an Academy Award. The promotional video in question is for this year's intriguing philosophical world simulator, Everything.

With its thought-provoking trailer running at almost 11 minutes long, the promo video highlights the interconnected nature of, well, everything. Overdubbed with a speech from renowned British philosopher Alan Watts, the trailer uses in-game footage to show life in all its strange and wonderful forms. Thanks to its length and intriguing subject matter, this clever promotional video now stands a shot at winning an Academy Award for best animated short film.

Interestingly, this isn't the first time that the game's developer, David OReilly, has flirted with cinema. Bringing his gaming expertise to the screen, OReilly created the holographic video game scenes in Spike Jonze's brilliant sci-fi flick, Her.

For those who are curious about a game that lets you play as everything from an atom to a traffic cone, Everything is out now on PS4, PC and Mac. With the video already winning the Jury Prize for Animation at this year's Vienna Shorts film festival, perhaps it will go on to be the first ever video game trailer to win an Academy Award.

Source: Eurogamer



Read the full article here by Engadget

This is the GoPro Fusion: the company's all-in-one VR camera

GoPro surprised everybody when it teased its "Fusion" 360-degree action camera back in April. Mostly because GoPro rarely teases anything (Karma, perhaps being the exception), preferring to show up out of nowhere with a glossy release video and launch event. The half-reveal didn't tell us much, just that there was a camera coming in the fall, it had a resolution of 5.2K, a feature called "OverCapture" that would allow you to "punch out" a regular video from the spherical capture and well, that was kinda it.

Today the company is willing to show a lot more leg, as it were. In fact, GoPro is using the Mountain Games (which it sponsors) as a platform to show it to the world for the first time. We're here, and we're getting the chance to check it out for ourselves to see if it really is the "ultimate GoPro." In the meantime, here's a first look at the actual device itself.

We already knew from the teased image what the Fusion would look like. With its gunmetal gray exterior and grip-friendly textured edges, it's not dissimilar to the Hero5 Black. But there are some key differences. Obviously, the main one being that there are two lenses, one on the front and one on the rear.

Secondly, it's a shade bigger than the current Hero5 Black; it's also fully square, not oblong as the teaser images implied. The two lenses are set into the body without the typical GoPro square/flat protection cap. This is so that the camera can get an unobstructed view through 180 degrees, and not just straight ahead. The units shown here in Vail are pre-production versions, but GoPro assures us this is the final design (bar perhaps some additional Fusion branding, etc.).

Despite having two "equal" sides, one of them has the classic small LCD display for showing what mode the camera is in. There are, of course, the usual power and highlight buttons too. We weren't able to get a look at the battery yet, but hopefully we can get a sense of how much power it packs after some time with it. For now, here's what GoPro's Fusion 360 cam will look like!



Read the full article here by Engadget

Monday, 12 June 2017

How low-code software is meeting development challenges

There’s increasing demand to produce software quickly, which means companies are on the lookout for alternatives to traditional coding. Many are turning to low-code platforms, and workflow software specialist TrackVia has created an infographic charting the rise of low-code solutions. Among the highlights are that one in three companies are already using low-code solutions, and 43 percent are interested in using them. Their popularity is down to the fact that they address key challenges, including compatibility with other software and apps, increased productivity and easier customization. Lower costs and the ability to offer mobile compatibility are seen as important too.… [Continue Reading]


Read the full article here by Betanews

Cancer Drug Proves To Be Effective Against Multiple Tumors

An anonymous reader writes: 86 cancer patients were enrolled in a trial of a drug that helps the immune system attack tumors. Though they had different kinds of tumor -- pancreas, prostate, uterus or bone -- they all shared a genetic mutation that disrupts their cells' ability to fix damaged DNA, found in 4% of all cancer patients. But tumors vanished and didn't return for 18 patients in the study, reports the New York Times, while 66 more patients "had their tumors shrink substantially and stabilize, instead of continuing to grow." The drug trial results were "so striking that the Food and Drug Administration already has approved the drug, pembrolizumab, brand name Keytruda, for patients whose cancers arise from the same genetic abnormality. It is the first time a drug has been approved for use against tumors that share a certain genetic profile, whatever their location in the body." The researchers say that just in the U.S. there are 60,000 new patients every year who could benefit from the new drug.

Read more of this story at Slashdot.



Read the full article here by Slashdot

Sunday, 11 June 2017

Lullaby: Google's New C++ Libraries For VR/AR

Google's newest open-source project is called Lullaby.

Lullaby is a set of C++ libraries designed for developing VR (virtual reality) and AR (augmented reality) games/applications. The libraries are built for high-performance VR/AR: they support full 3D VR environments, provide a common set of widgets and a complete animation system, and offer a Java-based API for integrating with Android apps. They are used by many teams at Google. Android, iOS, Linux, and Windows are fully supported.

Among the teams at Google already using Lullaby are VR Home, the Play Store, YouTube, Play Movies, and Google Earth.

More details on Lullaby via GitHub.



Read the full article here by Phoronix

Docker's LinuxKit Launches Kernel Security Efforts, Including Next-Generation VPN

darthcamaro writes: Back in April, when Docker announced its LinuxKit effort, the primary focus appeared to just be [tools for] building a container-optimized Linux distribution. As it turns out, security is also a core focus -- with LinuxKit now incubating multiple efforts to help boost Linux kernel security. Among those efforts is the Wireguard next generation VPN that could one day replace IPsec. "Wireguard is a new VPN for Linux using the cryptography that is behind some of the really good secure messaging apps like Signal," said Nathan McCauley, Director of Security at Docker Inc. According to the article, Docker also has several full-time employees looking at ways to reduce the risk of memory corruption in the kernel, and is also developing a new Linux Security Module with more flexible access control policies for processes.

Read more of this story at Slashdot.



Read the full article here by Slashdot

Intel Threatens to Sue Qualcomm over Windows 10

Intel published a blog post earlier this week concerning x86's upcoming 40th anniversary, and one portion, "Protecting x86 ISA Innovation," includes veiled warnings to Qualcomm and Microsoft concerning their ARM collaboration: "...there have been reports that some companies may try to emulate Intel's proprietary x86 ISA without Intel's authorization." It seems obvious that Intel is unhappy about Windows 10 PCs running Snapdragon 835 chips with x86 emulation, and some think an epic legal battle may be on the way if Qualcomm doesn't agree to licensing. This author opines that the true purpose of Windows 10 on ARM is ensuring the PC market is not hamstrung by an Intel monopoly. "Intel carefully protects its x86 innovations, and we do not widely license others to use them," the post continues, suggesting two things: that Qualcomm has not licensed Intel's "x86 innovations," and that Intel isn't particularly inclined to do so, given that Qualcomm's entry into the PC market will negatively impact Intel's already-falling PC chip sales. Furthermore, Intel has aggressively pursued those companies that do violate its intellectual property. And it isn't shy about reminding us of the results. "Over the past 30 years, Intel has vigilantly enforced its intellectual property rights against infringement by third-party microprocessors," the two explain.

Read the full article here by [H]ardOCP News/Article Feed

Friday, 9 June 2017

Mozilla's WebExtensions APIs allow for cross-browser extensions in Chrome, Opera, Firefox and Edge

Creating add-ons for multiple web browsers can be a nightmare for developers, but Mozilla wants to simplify things. With the introduction of WebExtensions APIs, it is going to become much easier to make extensions that work in multiple web browsers with only minor changes. The APIs are based on HTML, CSS, and JavaScript, and the aim is to standardize them further to make it even easier to share extensions between browsers. Mozilla is now pushing the technology in the hope that it will gain momentum and a growing following. In a post on its Hacks blog, Mozilla explains that WebExtensions enables developers to "write one… [Continue Reading]


Read the full article here by Betanews

Tuesday, 6 June 2017

Four short links: 6 June 2017

P2P Browser, Learning RTS, Growth Hacking, and Apple's ARKit

  1. Beaker Browser -- A peer-to-peer web browser. Create and fork websites directly from your computer. You control your data. No blockchain required.
  2. TorchCraft -- a library that enables deep learning research on real-time strategy (RTS) games such as "StarCraft: Brood War" by making it easier to control these games from a machine learning framework—here, Torch. This white paper argues for using RTS games as a benchmark for AI research and describes the design and components of TorchCraft. Code on GitHub.
  3. GrowthHackers Conference Notes -- notes from all the sessions on the recent GrowthHackers Conference.
  4. Apple ARKit -- augmented reality framework for iPhone and iPad. I'm looking forward to their hardware entry, for which this greases the runway.

Continue reading Four short links: 6 June 2017.



Read the full article here by Four Short Links - O'Reilly Media

A Look at Eevee: Blender's Realtime Rendering Engine



Remington Graphics takes a look at Blender's new Eevee engine, and specifically at the PBR capabilities.

VIDEO




Read the full article here by BlenderNation

Sunday, 4 June 2017

Chrome To Deprecate PNaCl, Embrace New WebAssembly Standard

An anonymous reader quotes Tom's Hardware: Google announced that its Portable Native Client (PNaCl) solution for making native code run inside the browser will be replaced by the new cross-browser web standard called WebAssembly... Even though Google open sourced PNaCl as part of the Chromium project, Mozilla ended up creating its own alternative called "asm.js," an optimized subset of JavaScript that native code can be compiled to. Mozilla thought that asm.js was far simpler to implement and required no API compatibility, as PNaCl did. As these projects seemed to go nowhere, with everyone promoting their own standard, the major browser vendors seem to have eventually decided on creating WebAssembly. WebAssembly can give web apps near-native performance, offers support for more CPU features, and is simpler to implement in browsers and use by developers.

Read more of this story at Slashdot.



Read the full article here by Slashdot

Friday, 2 June 2017

Germany Detects Emissions Cheat Software In Audi Models

The German government has accused Audi of cheating emissions tests with its top-end models, marking the first time the company has been accused of such wrongdoing in its home country. Reuters reports: The German Transport Ministry said it has asked Volkswagen's (VOWG_p.DE) luxury division to recall around 24,000 A7 and A8 models built between 2009 and 2013, about half of which were sold in Germany. The affected Audi models with so-called Euro-5 emission standards emit about twice the legal limit of nitrogen oxides when the steering wheel is turned more than 15 degrees, the ministry said. It is also the first time that Audi's top-of-the-line A8 saloon has been implicated in emissions cheating. VW has said to date that the emissions-control software found in its rigged EA 189 diesel engine does not violate European law. The 80,000 3.0-liter vehicles affected by VW's emissions cheating scandal in the United States included Audi A6, A7 and Q7 models as well as Porsche and VW brand cars. The ministry said it has issued a June 12 deadline for Audi to come up with a comprehensive plan to refit the cars. Ingolstadt-based Audi issued a recall for the 24,000 affected models late on Thursday, some 14,000 of which are registered in Germany, and said software updates will start in July. It will continue to cooperate with Germany's KBA motor vehicle authority, Audi said.

Read more of this story at Slashdot.



Read the full article here by Slashdot

Photo

Read the full article here by http://owlturd.com/

Thursday, 1 June 2017

Your Face or Fingerprint Could Soon Replace Your Plane Ticket

Headed on a trip? You may soon be able to ditch your boarding pass in favor of your fingers or face. From a report: Delta announced, on Wednesday, a new biometric identification pilot program that will eventually let you use your fingerprints instead of a plane ticket (Editor's note: the link could be paywalled; alternative source). That followed a JetBlue announcement hours earlier that it is testing a program in Boston that will match pictures of customers' faces with the passport database maintained by U.S. Customs and Border Protection. Delta's program, which kicked off at Washington's Reagan National Airport, is in partnership with Clear, a company that already lets customers skip to the front of security lines without identification.

Read more of this story at Slashdot.



Read the full article here by Slashdot