Friday, 9 September 2016

LLV8 Is An Experimental LLVM Compiler For V8 JavaScript

LLV8 is an experimental compiler for the V8 JavaScript Engine as shipped in Chrome, etc. LLV8 makes use of LLVM's MCJIT for code optimization, and while it takes longer to compile this way, the generated code should be superior...
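
For a feel of what the MCJIT workflow looks like, here's a minimal sketch using the llvmlite Python binding (a convenience choice for illustration only; LLV8 itself embeds LLVM in V8's C++ pipeline, and none of this reflects its actual internals):

    import ctypes
    import llvmlite.binding as llvm

    # One-time LLVM setup for generating native code.
    llvm.initialize()
    llvm.initialize_native_target()
    llvm.initialize_native_asmprinter()

    # A trivial function expressed in LLVM IR: double fsum(double, double).
    ir = """
    define double @fsum(double %a, double %b) {
    entry:
      %res = fadd double %a, %b
      ret double %res
    }
    """

    module = llvm.parse_assembly(ir)
    module.verify()

    # MCJIT compiles the whole module up front -- the "slower to compile,
    # faster to run" trade-off mentioned above.
    target_machine = llvm.Target.from_default_triple().create_target_machine()
    engine = llvm.create_mcjit_compiler(module, target_machine)
    engine.finalize_object()

    addr = engine.get_function_address("fsum")
    fsum = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double,
                            ctypes.c_double)(addr)
    print(fsum(1.5, 2.25))  # 3.75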

Read the full article here by Phoronix

The Android Runtime On Chrome OS Makes Use Of Wayland

It turns out that Google's Android Runtime for Chrome (ARC), the technology for letting Android apps run on Chrome OS, is making use of the Wayland protocol -- which could open the door to other Wayland clients running on Chrome OS.

Readers in the Phoronix Forums pointed out that the ARC++ runtime makes use of Wayland, per a session description for this month's XDC2016 conference in Helsinki.

David Reveman of Google will be talking about ARC++ and the description reads:

ARC++ provides existing and future Chromebooks with the ability to run applications from the Google Play Store. The performance and seamless integration into ChromeOS achieved by ARC++ is in large part a result of a carefully designed graphics stack. Rendering, compositing and window management has been designed to fit together in a way that makes no compromise on performance or resource usage.

This talk will give an overview of the pieces that make up the ARC++ graphics stack, describe how Wayland is used as a compositor protocol and explain how we achieve the goal of running Android applications on ChromeOS with native graphics performance and window management capabilities expected from an application running on a Chromebook. It will also provide some basic pointers for running Wayland clients on ChromeOS, and future direction.

I haven't been following ARC too closely, but I don't believe it's widely known that it makes use of Wayland. Neither Android nor Chrome OS has otherwise relied upon Wayland up to this point.
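
Wayland itself is a lightweight protocol: length-prefixed messages over a Unix domain socket, which is part of what makes embedding it in a container setup like ARC++ practical. As a purely illustrative sketch (assuming a running Wayland compositor and the default socket name), here's a raw wl_display.get_registry round trip in Python, with no libwayland involved:

    import os
    import socket
    import struct

    # The compositor listens on a Unix socket in XDG_RUNTIME_DIR
    # ("wayland-0" by default).
    path = os.path.join(os.environ["XDG_RUNTIME_DIR"],
                        os.environ.get("WAYLAND_DISPLAY", "wayland-0"))
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)

    # Wire format: uint32 object id, then a uint32 packing
    # (message size in bytes << 16) | opcode, then the arguments.
    # wl_display is always object 1; get_registry is opcode 1 and takes
    # one new_id argument -- we assign id 2 to the new registry object.
    sock.send(struct.pack("=III", 1, (12 << 16) | 1, 2))

    # The compositor replies with wl_registry.global events advertising
    # each interface it offers (wl_compositor, wl_shm, ...).
    data = sock.recv(4096)  # sketch only: a real client loops and buffers
    offset = 0
    while offset + 8 <= len(data):
        obj, size_op = struct.unpack_from("=II", data, offset)
        size, opcode = size_op >> 16, size_op & 0xFFFF
        print(f"event from object {obj}: opcode={opcode}, {size} bytes")
        offset += size
    sock.close()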

This talk should be interesting and will happen in Helsinki with XDC2016 running from 21 to 23 September.

For those of you who don't recognize the name, David Reveman has long been involved in the open-source graphics scene, though there hasn't been much from him in the past few years. His past accomplishments include leading the XGL/Compiz work in the early days at Novell and GPU acceleration for Mono's Moonlight, among other projects during his Novell tenure before he joined Google a number of years ago.

Stay tuned for XDC2016 remote coverage on Phoronix later this month.



Read the full article here by Phoronix

Rocket League Finally Released For Linux

A game that many Linux gamers have been waiting months to see has finally been released on Steam for Linux: Rocket League...

Read the full article here by Phoronix

Thursday, 8 September 2016

Google To Buy Apigee For $625 Million To Expand Enterprise

An anonymous reader shares a Bloomberg report: Google is buying software development toolmaker Apigee for $625 million, the latest move by the search giant to bulk up its cloud-based offerings for businesses. Alphabet Inc.'s Google has agreed to pay $17.40 a share in cash, San Jose, California-based Apigee said in a statement Thursday. That's a 6.5 percent premium to Apigee's closing price Wednesday. The companies expect the deal to be completed by the end of the year. Apigee sells a platform that aids companies in managing their APIs, which are programming tools that help developers build software that talks to each other and shares information without revealing the underlying code. APIs have become an integral part of cloud software development, allowing one application to pull data and use services from multiple other programs. "The addition of Apigee's API solutions to Google cloud will accelerate our customers' move to supporting their businesses with high quality digital interactions," Diane Greene, senior vice president of Google's cloud business, said in a blog post, referring to application program interface products.
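
For readers unfamiliar with the category: an API management platform sits between clients and backends as a proxy that handles keys, quotas, and analytics. A minimal sketch of what the client side typically looks like (the URL and header name here are hypothetical placeholders, not Apigee's actual scheme):

    import requests

    # Hypothetical gateway-managed endpoint; a platform like Apigee would
    # validate the key, enforce rate limits, log analytics, and then route
    # the call to the real backend service.
    BASE_URL = "https://api.example.com/v1"   # placeholder, not a real service
    API_KEY = "your-key-here"                 # issued by the gateway

    resp = requests.get(
        f"{BASE_URL}/orders/42",
        headers={"apikey": API_KEY},  # header name varies by gateway config
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())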

Read more of this story at Slashdot.



Read the full article here by Slashdot

Wednesday, 7 September 2016

10 Jaw-Dropping Lego Mindstorms Projects

We all love Lego. Whether you’re a fan of the video games -- Lego Dimensions is a particular favorite (Amazon US, CA, UK) -- or enjoy building kits of various themes and sizes, the enduring brick-building toy continues to thrill children of all ages. Not just children, either. The Mindstorms range of Lego kits features an “intelligent” programmable brick that can be used to extend the possibilities of the kit. We’ve previously featured one such kit in one of our most popular giveaways ever, and the Mindstorms range is becoming an increasingly popular tool for makers of all skill levels and interests. These 10...

Read the full article: 10 Jaw-Dropping Lego Mindstorms Projects



Read the full article here by MakeUseOf.com

Intel buys Movidius to build the future of computer vision

Intel is making it extra-clear that computer vision hardware will play a big role in its beyond-the-PC strategy. The computing behemoth has just acquired Movidius, a specialist in AI and computer vision processors. The Intel team isn't shy about its goals. It sees Movidius as a way to get high-speed, low-power chips that can power RealSense cameras in devices that need to see and understand the world around them. Movidius has already provided the brains behind gadgets like drones and thermal cameras, many of which are a logical fit for Intel's depth-sensing tech -- and its deals with Google and Lenovo are nothing to sneeze at, either.

This isn't Intel's first AI-related buyout. However, it shows that Intel is increasingly determined to find new avenues to explore now that it can't count on large jumps in processor performance to maintain the bottom line. An acquisition like this could make it the go-to source for chips in fields that are still growing quickly, such as robotics and smart homes.

Source: Intel, Movidius



Read the full article here by Engadget

Tuesday, 6 September 2016

ARM now arm-in-arm with SoftBank as Japan’s Brexit demands loom

AMD ships its extra-efficient 7th-generation processors in PCs

While Intel is busy revamping its laptop processors, AMD is focused on the desktop side of personal computing. The chip designer has started shipping its 7th-generation A-series processors in desktop PCs, starting with machines from HP and Lenovo. The CPUs are based around as many as four Excavator cores, rather than the coveted Zen cores you've heard about lately, but that should still get you a lot of performance per watt. If you believe AMD, its 35- and 65-watt processors deliver the kind of speed that previously took over 90 watts -- the A12-9800 is about as fast in a general computing benchmark (PCMark) as Intel's Core i5-6500, and roughly twice as fast in graphics (3DMark) if you're relying on integrated video.

As you might guess from the testing, visual performance plays a big role. On top of a newer DirectX 12-friendly graphics architecture, the new processors tout native decoding of 4K video in both H.264 and H.265 formats, taking a large load off your system while you're watching Ultra HD movies.

The efficiency angle is a familiar one for AMD, and not surprising given that it's the company's main advantage. You're still looking at higher-end Intel Core i5 and i7 chips if you're focused on raw performance in a desktop. With that said, this may be worthwhile if you want a glimpse at AMD's future. The 7th-gen A-series is the first processor line based on AMD's new AM4 platform and the interfaces that come with it, including support for USB 3.1 and NVMe solid-state drives. At least some of the technology you see here will carry on for multiple hardware generations.

Via: VentureBeat

Source: AMD (1), (2)



Read the full article here by Engadget

Monday, 5 September 2016

400,000 GitHub Repositories, 1 Billion Files, 14TB of Code: Spaces or Tabs?

Here's a debate that refuses to die: given a choice, would you rather use spaces or tabs? An episode of Silicon Valley last season had a bit on this. Now we have more data to analyze people's behavior. A Google developer has looked into 400,000 GitHub repositories -- 1 billion files, 14 terabytes of code -- to find that programmers with an interest in specific languages do seem to prefer either tabs or spaces. Spoiler alert: spaces win, like all the time.
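
The original analysis ran as a BigQuery job over the public GitHub dataset. As a back-of-the-envelope local version, here's a sketch that tallies tab- versus space-indented files in a source tree (the extension list and majority-vote heuristic are my own simplifications, not the study's exact methodology):

    from collections import Counter
    from pathlib import Path
    import sys

    EXTS = {".c", ".cc", ".go", ".java", ".js", ".py"}  # assumed; adjust freely

    def classify(path):
        """Label a file 'tabs' or 'spaces' by majority vote over indented lines."""
        votes = Counter()
        try:
            for line in path.read_text(errors="ignore").splitlines():
                if line.startswith("\t"):
                    votes["tabs"] += 1
                elif line.startswith("  "):  # two or more leading spaces
                    votes["spaces"] += 1
        except OSError:
            return None
        return votes.most_common(1)[0][0] if votes else None

    totals = Counter()
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for f in root.rglob("*"):
        if f.is_file() and f.suffix in EXTS:
            verdict = classify(f)
            if verdict:
                totals[verdict] += 1
    print(dict(totals))  # e.g. {'spaces': 812, 'tabs': 133}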

Read more of this story at Slashdot.



Read the full article here by Slashdot

Saturday, 3 September 2016

Gran Turismo Sport delayed to 2017

Taking a little longer to finish that last lap

Gran Turismo Sport won’t see release until sometime next year, Polyphony Digital announced this morning on the PlayStation Blog. Originally scheduled for a Nov. 15 launch in North America and Japan, the game has been pushed back so that the developer can spend "more time to perfect [it]."

"We do not want to compromise the experience in any way," creator Kazunori Yamauchi wrote. "While we cannot confirm a new release date at this time, we are more committed than ever to making GT Sport the best Gran Turismo game to date."

The delay comes even after Polyphony Digital canceled an open beta in order to cut down on development time. Both the demo and the game had been revealed last fall during Paris Games Week for a 2016 debut.

Gran Turismo Sport has been playable at recent events like E3 in June and Gamescom in August, however, and it will continue to make the rounds at various events throughout the year. The latest entry in the successful racing series will be the first to hit PlayStation 4 whenever it does arrive.



Read the full article here by Polygon

New Google app wants to crowdsource your help... for free

On all mobile platforms -- and, indeed, desktop ones -- there is no end of apps and websites that can be used to earn money and other rewards in return for providing feedback. Google, however, has a different idea. The company has released a new app for Android called Crowdsource. The idea is to improve the likes of Google Maps and Google Translate with input from the likes of you and me. But rather than paying contributors -- or even offering any kind of incentive -- Google is rather cheekily looking for help completely gratis. Download a copy of the… [Continue Reading]


Read the full article here by Betanews

Now Google Cast is built into every Chrome browser

Like Gmail before it, Google Cast honestly seems to have been in beta forever. Well, it's actually only been two years according to Google, and that test period ends now. Cast is built directly into Chrome as of today, and anyone can use it without having to install or configure additional add-ons or extensions. Now everyone can throw individual browser tabs to their Chromecast -- or even their desktop -- in addition to services like Netflix or Google Play Music.

Maybe now that the software is out of beta, we can get a totally new Chromecast with stronger innards all around. Maybe. The folks who, Google says, watched and listened to 50 million hours of media using the HDMI dongle in the last month would probably appreciate it.

Source: Chrome Blog



Read the full article here by Engadget

Photo

Read the full article here by http://owlturd.com/

Friday, 2 September 2016

Cron Mail

Take THAT, piece of 1980s-era infrastructure I've inexplicably maintained on my systems for 15 years despite never really learning how it works.

Read the full article here by xkcd.com

What Gives With Insects Pretending to Be Sticks and Leaves?

Facebook opens its advanced AI vision tech to everyone

Over the past two years, Facebook's artificial intelligence research team (also known as FAIR) has been hard at work figuring out how to make computer vision as good as human vision. The crew has made a lot of progress so far (Facebook has already incorporated some of that tech for the benefit of its blind users), but there's still room for improvement. In a post published today, Facebook not only details its latest computer-vision findings but also announces that it's open-sourcing them so that everyone can pitch in to develop the tech. And as FAIR tells us, improved computer vision will not only make image recognition easier but could also lead to applications in augmented reality.

There are essentially three sets of code that Facebook is putting on GitHub today. They're called DeepMask, SharpMask and MultiPathNet: DeepMask figures out if there's an object in the image, SharpMask delineates those objects and MultiPathNet attempts to identify what they are. Combined, they make up a visual-recognition system that Facebook says is able to understand images at the pixel level, a surprisingly complex task for machines.
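
The released code is Torch-based, but the division of labor is easy to sketch. Here's a toy, stubbed Python version of how the three stages compose (the class names mirror the projects; everything inside them is stand-in logic, not the real models):

    import random

    class DeepMask:
        """Stage 1: propose candidate object masks for an image."""
        def propose(self, image):
            # Stand-in: the real model scores a mask at every patch location.
            return [{"mask": f"rough-mask-{i}", "score": random.random()}
                    for i in range(3)]

    class SharpMask:
        """Stage 2: refine each coarse mask toward pixel-accurate boundaries."""
        def refine(self, image, proposal):
            proposal["mask"] = proposal["mask"].replace("rough", "sharp")
            return proposal

    class MultiPathNet:
        """Stage 3: classify the object inside each refined mask."""
        LABELS = ["person", "dog", "camera", "sheep"]
        def classify(self, image, proposal):
            proposal["label"] = random.choice(self.LABELS)
            return proposal

    def segment(image):
        proposals = DeepMask().propose(image)                        # find objects
        refined = [SharpMask().refine(image, p) for p in proposals]  # outline them
        return [MultiPathNet().classify(image, p) for p in refined]  # name them

    print(segment("photo.jpg"))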

"There's a view that a lot of computer vision has progressed and a lot of things are solved," says Piotr Dollar, a research scientist at Facebook. "The reality is we're just starting to scratch the surface." For example, he says, computer vision can currently tell you if an image has a dog or a person. But a photo is more than just the objects that are in it. Is the person tall or short? Is it a man or a woman? Is the person happy or sad? What is the person doing with the dog? These are questions that machines have a lot of difficulty answering.

In the blog post, he describes a photo of a man next to an old-fashioned camera. He's standing in a grassy field with buildings in the background. But a machine sees none of this; to a machine, it's just a bunch of pixels. It's up to computer-vision technology like the one developed at FAIR to segment each object out. Considering that real-world objects come in so many shapes and sizes as well as the fact that photos are subject to varying backgrounds and lighting conditions, it's easy to see why visual recognition is so complex.

The answer, Dollar writes, lies in deep convolutional neural networks that are "trained rather than designed." The networks essentially learn from millions of annotated examples over time to identify the objects. "The first stage would be to look at different parts of the image that could be interesting," he says. "The second step is to then say, 'OK, that's a sheep,' or 'that's a dog.'

"Our whole goal is to get at all the pixels, to get at all the information in the image," he says. "It's still sort of a first step in the grand scheme of computer vision and having a visual recognition system that's on par with the human visual system. We're starting to move in that direction."

By open-sourcing the project on GitHub, he hopes that the community will start working together to solve any problems with the algorithm. It's a step that Facebook has taken before with other AI projects, like fastText (AI language processing) and Big Sur (the hardware that runs its AI programs). "As a company, we care more about using AI than owning AI," says Larry Zitnick, a research manager at FAIR. "The faster AI moves forward, the better it is for Facebook."

One of the reasons Facebook is so excited about computer vision is that visual content has exploded on the site in the past few years. Photos and videos practically rule News Feed. In a statement, Facebook said that computer vision could be used for anything from searching for images with just a few keywords (think Google Photos) to helping those with vision loss understand what's in a photo.

There are also some interesting augmented reality possibilities. Computer vision could identify how many calories are in a photo of a sandwich, for example, or it could see if a runner has the proper form. Now imagine if this kind of information was accessible on Facebook. It could bring a whole new level of interaction to the photos and videos you already have. Ads could let you arrange furniture in a room or try on virtual clothes. "It's critical to understand not just what's in the image, but where it is," says Zitnick about what it would take for augmented reality applications to take off.

Dollar brought up Pokémon Go as an example. Right now the cartoon monsters are mostly just floating in the middle of the capture scene. "Imagine if the creature can interact with the environment," he says. "If it could hide behind objects, or jump on top of them."

The next step would be to bring this computer-vision research into the realm of video, which is especially challenging because the objects are always moving. FAIR says that some progress has already been made: It's able to figure out certain items in a video, like cats or food. If this identification could happen in real time, then it could theoretically be that much easier to surface the Live videos that are the most relevant to your interests.

Still, with so many possibilities, Zitnick says FAIR's focus right now is on the underlying tech. "The fundamental goal here is to create the technologies that enable these different potential applications," he says. Making the code open-source is a start.

Source: Facebook



Read the full article here by Engadget