Monday, 24 April 2017

Official Launch Of The Asus Tinker Board

Earlier this year, a new single board computer was announced, and subsequently made its way onto the market. The Tinker Board was a little different from the rest of the crop of Raspberry Pi lookalikes: it didn’t come from a no-name company or a crowdfunding site, but from a trusted name, Asus. As a result it is a very high-quality piece of hardware, as we remarked when we reviewed it.

Unfortunately, though we were extremely impressed with the board itself, we panned the Asus software and support offering of the time, because it was so patchy as to be non-existent. We had reached out to Asus while writing the review but received no answer; subsequently they contacted us with a sorry tale of some Tinker Boards finding their way onto the market early, before their official launch and before they had put together their support offering. We updated our review accordingly; after all, it is a very good product and we didn’t like having to pan it.

This week, news has come through from Asus that they have now launched the board officially. There is a new OS version based on Debian 9, which features hardware acceleration for both the Chromium web browser and the bundled UHD media player. There is also an upcoming Android release, though it is still in beta at the time of writing and little more information is available.

The Tinker Board is one of the best of the current crop of Raspberry Pi-like single board computers, and it easily trounces the Pi itself on most counts. Being launched alongside a meaningful software and support offering will give it a chance to prove itself. In our original review we urged tech-savvy readers to buy one anyway; now that it has some of the backup it deserves, we’d urge you to buy one for your non-technical family members too.


Filed under: computer hacks

Read the full article here by Hack a Day

Sunday, 23 April 2017

Android 6.0.1 Released for Asus Tinker Board

Apr 23, 2017, 06:00

Asus has now made available their first release of the Android operating system on the Asus Tinker Board. Asus has labelled the release as TinkerOS_Android V13.11.0.2 (Beta version). It’s a release of Android 6.0.1 running on kernel 3.10.0.




Read the full article here by Linux Today

Saturday, 22 April 2017

Oh, the 70's



Read the full article here by turnoff.us - geek comic site

Russia: 'Rage rooms' in Moscow a smashing way to reduce stress

Stress is a fact of life for many of us. Some pop a pill, or perhaps meditate to ease the anxiety. In Russia, some let off steam in a very different way...



Read the full article here by Likecool

Twitch kicks off Science Week by streaming Sagan's 'Cosmos'

Prepare to see all kinds of science-y streams on Twitch next week. The streaming platform is holding a week-long celebration of all things science, starting with a marathon of Carl Sagan's Cosmos: A Personal Voyage. Twitch's Cosmos channel will broadcast all 13 episodes of the series twice -- the first one will begin on April 24th, 3PM Eastern, while the second run will start airing on April 27th, 5PM Eastern. Creators will also be able to co-stream the show, though, so check around if you want to hear some modern commentary on top of Sagan's dulcet tones.

After the second broadcast, the official Twitch channel will air a live Q&A with Ann Druyan, Cosmos co-creator and Sagan's wife. Druyan said she's "truly excited to share Cosmos... with the vast Twitch community." Her husband "wanted to tear down the walls that exclude most of us from the scientific experience," after all, "so that we could take the awesome revelations of science to heart." She added that "the power of the original Cosmos series, with its enduring appeal to every generation since, is evidence of how much we hunger to feel our connection to the universe."

The marathon and Druyan's interview aren't the only things you can look forward to. As part of Science Week, Twitch is also interviewing quite a lengthy list of prominent personalities in the field, including:

  • Matthew Buffington – Director of Public Affairs at NASA's Ames Research Center in Silicon Valley and host of NASA in Silicon Valley podcast
  • Ariane Cornell -- Head of Astronaut Strategy and Sales and Head of North American Sales for the New Glenn Rocket at Blue Origin
  • Scott Manley -- Astronomer and online gaming personality under the handle, "Szyzyg," best known for video content about science and video games like Kerbal Space Program
  • Pamela Gay -- Astronomer and Principal Investigator of CosmoQuest, a citizen science facility, and the Director of Technology and Citizen Science at the Astronomical Society of the Pacific.
  • Kishore Hari -- Science educator and director of Bay Area Science Festival, based out of the University of California, San Francisco, best known as one of the lead organizers of the global March for Science
  • Fraser Cain -- Publisher of Universe Today, one of the most visited space and astronomy news websites on the internet, which he founded in 1999. He's also the co-host of the long-running Astronomy Cast podcast with Dr. Pamela Gay. Fraser is an advocate for citizen science in astronomy, and on the board of directors for CosmoQuest, which allows anyone to contribute to discoveries in space and astronomy.
  • EJ_SA -- Streaming on Twitch since December 2012, EJ_SA has been focused on showing viewers the seemingly magical accomplishments of past, present, and future space programs in KSP. This is where Space Shuttles fly, Rockets land, and Space Stations are built! Interactive chat, questions answered and weird facts about space all come together here!
  • Phil Plait -- Astronomer and science communicator. He writes the Bad Astronomy blog for Syfy Wire, was the head science writer for the new Netflix show Bill Nye Saves the World!, and is the science consultant on the science fiction mini-series Salvation coming out in the summer of 2017. He is a tireless promoter of science and lives to share his joy for the natural world.

Source: Twitch



Read the full article here by Engadget

Science in America - Neil deGrasse Tyson

Science in America - Neil deGrasse Tyson.



Read the full article here by Likecool

Epic Wants Architects to Use Unreal Engine as a Design Tool

Epic is ready to unleash their Unreal Engine upon the architectural design community. Using visual tools, architects can convey ideas to customers by showing them the exact structures that they envisioned. When VR is added to the mix, customers can stand in the middle of a structure and see, for example, how lighting affects the model. Thought a statue out front of the stadium was a good idea? Drop a virtual statue there, scale it up to the exact size you wish, and see how it works visually and emotionally in the space.

Scale models cost a lot of money to create, and they take up even more precious space in a design studio. Drawings are 2D and can't compare to a scale model or VR presentation. By showing clients in VR what the architect's ideas are for a home, business, and so on, a lot of time and money can be saved. If a design element seems out of place, it can easily be changed in Unreal Engine without costing the customer or architect a penny.

Here is a video from Fabrice Bourrelly, who is scheduled to be an integral part of the webinar demonstration. The company is planning four free webinars, with the first one kicking off April 2. That event will feature architect Fabrice Bourrelly, who will go over some of the reasons to use Unreal for architecture. Register for the webinar here.

"Using models of Philip Johnson's Glass House and Tadao Ando's Church of Light, Fabrice will illustrate how Unreal Engine brings the emotion, mood and atmosphere of offline-quality rendering to the real-time, immersive and interactive experience of virtual and augmented reality," Unreal community representative Chris Ruffo wrote in a blog post. "Fabrice is an architect, artist, and 3D visualizer who has become a leading user and teacher of Unreal Engine over the past year. Fabrice's client list includes Google, IDEO, Thomas Heatherwick, Anish Kapoor, Bentley Motors, and Philippe Starck."

Read the full article here by [H]ardOCP News/Article Feed

Wednesday, 19 April 2017

Audi teases its EV ambitions with the E-Tron Sportback

Audi's E-Tron electric car lineup isn't coming until next year, but in the meantime it's giving us a taste with the E-Tron Sportback Concept. It's debuting the electric SUV at the Auto Shanghai show in China, but it's not a flight-of-fancy show car. "Following close on its heels, in 2019 comes the production version of the Audi E-Tron Sportback," Audi Chairman Rupert Stadler said in a press release.

The upcoming EV has the looks and specs to take on Tesla's Model X. It can go 311 miles on a charge thanks to a liquid cooled 95-kWh lithium-ion battery, and has three electric motors with the equivalent of 429 horsepower (boostable to 496). That can take it from 0-62 mph in 4.5 seconds -- not as quick as Tesla's "ludicrous" 2.3 seconds for the Model S, but still stupidly fast.

The EV has typical concept details like side cameras instead of mirrors, a front fascia that can project animations onto the road and hyper-imposing 23-inch wheels. The "functional, reductive" interior features futuristic touch displays on the center console, dash and even the doors to display the side-camera info. The styling screams "electric car" for owners who want to show off their green bonafides -- as the gallery shows, however, the original design sketches were even more outré.

Audi said the E-Tron Sportback's Shanghai debut is a nod to China being the world's largest EV market. "There are already about 150,000 charging stations in the country, with another 100,000 due to come on stream by the end of 2017," says Audi's Dietmar Voggenreiter. "In the next five years we will be offering five E-Tron models in China, including purely battery-powered vehicles with ranges well in excess of 500 km (311 miles)."

Given that Audi will launch the E-Tron Sportback for real in less than two years, expect the production model to look much like the concept, minus things like the far-out interior, side cameras and massive wheels. We'd guess it'll be priced similarly to Tesla's Model X, but we'll have a better idea once Audi unveils its first E-Tron car sometime in 2018.

Via: Autoblog

Source: Audi



Read the full article here by Engadget

Microsoft turns two-factor authentication into one-factor by ditching password

Original article

Facebook Unveils Two New VR Cameras With ‘Six Degrees of Freedom’

Caffe2: A New, Open-Source Deep Learning Framework From Facebook

Facebook just announced Caffe2, a new deep learning framework developed in cooperation with NVIDIA and other vendors.

Facebook announced minutes ago from their F8 developer conference Caffe2 as a new open-source framework for deep learning. Caffe2 was also announced on its new website, Caffe2.ai:

"Caffe2 is shipping with tutorials and examples that demonstrate learning at massive scale which can leverage multiple GPUs in one machine or many machines with one or more GPUs. Learn to train and deploy models for iOS, Android, and Raspberry Pi. Pre-trained models from the Caffe2 Model Zoo can be run with just a few lines of code. Caffe2 is deployed at Facebook to help developers and researchers train large machine learning models and deliver AI-powered experiences in our mobile apps. Now, developers will have access to many of the same tools, allowing them to run large-scale distributed training scenarios and build machine learning applications for mobile."

Among the vendors working with Facebook on Caffe2 are NVIDIA, Qualcomm, Intel, Amazon, and Microsoft. NVIDIA has already put out a Caffe2 blog post with some initial numbers. The BSD-licensed code can be found on GitHub.
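
For readers wondering what "just a few lines of code" looks like in practice, below is a minimal sketch of the Caffe2 Python workflow. It is an illustration only: it assumes the caffe2 Python package from this release is installed, and it runs a single operator through the workspace rather than a full Model Zoo network.

    # Minimal Caffe2 sketch: feed a blob, run one operator, fetch the result.
    # Assumes the caffe2 Python package is installed and importable.
    import numpy as np
    from caffe2.python import core, workspace

    # Feed an input tensor into the global workspace.
    x = np.random.randn(2, 3).astype(np.float32)
    workspace.FeedBlob("X", x)

    # Build and run a single ReLU operator; real models chain many such
    # operators into a net and load pre-trained weights from the Model Zoo.
    relu_op = core.CreateOperator("Relu", ["X"], ["Y"])
    workspace.RunOperatorOnce(relu_op)

    print(workspace.FetchBlob("Y"))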



Read the full article here by Phoronix

Monday, 17 April 2017

Microsoft Is Experimenting with Tabs in File Explorer and Other Apps on Windows 10

Wow, it only took 20 years. Tabbed browsing is finally coming to File Explorer as part of a bigger initiative called "Tabbed Shell," which will bring tab functionality to any and all Windows apps. I guess the days of having multiple instances of the same program taking up your taskbar will soon be over.

…Tabbed Shell is a feature being worked on at an OS level, and doesn't require work from app developers to take advantage of it. By default, Tabbed Shell works with any app window, whether it be Photoshop, File Explorer, or Microsoft Word. Any UWP, Win32 or Centennial app will work. Much like in Edge, you'll find a tabbed interface at the top of a window where you can switch between instances of the same app. This means that right away, any app can take advantage of the new tabbing experience without any developer work. If an app features a titlebar, it will be able to function with Tabbed Shell.

Read the full article here by [H]ardOCP News/Article Feed

Thursday, 13 April 2017

Google Image Search gets fashion-conscious with 'style ideas' on Android and the web

You'd be forgiven for thinking that Google is entirely pre-occupied with fake news and the fact-checking thereof these days, but there are still rather more interesting changes and additions being made to search. The latest new feature is "style ideas" which makes its way to the web and Android today. Google says that the aim of the feature is to help "boost your search style IQ" -- because, after all, "when it comes to fashion, it’s hard to know where to start." What this means in practice is that when Image Search is used to track down a particular product,…


Read the full article here by Betanews

Renesas Unveils Its Open Autonomous Vehicle Platform


Tokyo-based Renesas Electronics announced Monday that it is launching a new open platform for advanced driving assistance (ADAS) and autonomous driving systems. 

Dubbed Renesas autonomy, the platform will employ ADAS and autonomous driving technologies being developed by Renesas and a number of partners, including AutonomousStuff, Cogent Embedded, Polysync, eTrans, and the University of Waterloo in Ontario, Canada. In addition, the platform uses technologies produced by the R-Car Consortium, which Renesas established in 2010. The Consortium now has 195 member entities, including hardware manufacturers, software companies, and research institutes; members include NEC, Hitachi and QNX Software Systems.

These alliances have produced, among other things, Renesas’ Lincoln (model MKZ) demonstration car for autonomous driving, which was showcased at the Consumer Electronics Show in Las Vegas in January.

“The Renesas autonomy Platform offers end-to-end solutions scaling from secure cloud connectivity and sensing to vehicle control,” says Uwe Westmeyer, Principal Engineer at Renesas Global ADAS Centre in Dusseldorf, Germany. “It connects everything we are offering under ADAS and autonomous driving. This will help customers to reduce integration efforts.”

Accompanying the announcement, Renesas released its first product under the Renesas autonomy brand: the R-Car V3M image recognition system-on-chip (SoC) targeting smart camera applications. The chipmaker, which claims to be the world’s leading supplier of SoCs and microcontroller units, shipped almost one billion units to the automotive industry alone in 2015.

The company noted that the new sensor incorporates an image signal processor (ISP), which frees up circuit-board space and reduces system-manufacturing costs. In addition, the device complies with the ISO26262 safety standard for electronic systems.

“The R-Car V3M is a good example of our strategy,” said Westmeyer. “It is based on discussions we had with Tier 1 [companies] and OEMs. It will help our customers develop leading-edge, cost-efficient smart camera applications, surround view systems, even lidars.”

Renesas exhibited the new SoC, its autonomy platform, and the Lincoln demonstration car at its DevCon Japan trade fair held in Tokyo on April 11.  

During his keynote speech at the fair, Renesas President Bunsei Kure revealed the impetus for the R-Car Consortium and autonomy platform when he admitted that it was “difficult to survive on our own” in such a competitive industry. He added that Renesas was working with so many partners to ensure “there will still be a semiconductor manufacturer left in Japan that can supply the automotive industry.”

No doubt on Kure's mind is smartphone chipmaker Qualcomm’s bid to buy NXP Semiconductors NV in the Netherlands, a leading supplier of chips to the automotive industry. The acquisition will cost a reported $47 billion, the biggest ever in the semiconductor industry. Qualcomm, which is seeking to diversify away from smartphones, is waiting on the approval of US antitrust regulators to complete its purchase.

Meanwhile, Intel is also seeking to gain ground in the automotive industry. It has agreed to purchase Mobileye, an Israeli supplier of software safety products for the ADAS and autonomous vehicle markets.

Faced with this kind of international competition, Renesas is emphasizing that its autonomy platform is open: customers can choose where to start their ADAS and autonomous driving development, building in part on their own differentiating technologies or on the technologies provided by the Renesas autonomy platform and the R-Car Consortium.



Read the full article here by Computing: IEEE Spectrum

Defeat the Markup: iPhone Built by Cruising Shenzhen


[Scotty Allen] from Strange Parts has just concluded a three-month journey through what is clearly one of the most interesting Shenzhen market projects we have seen in a while. We have all heard amazing tales about the versatility of these Chinese markets and the multitude of parts, tools and expertise available at your disposal. But how far can you really go, and what’s the most outrageous project you could complete if you so wished? To answer this question, [Scotty] decided to source and assemble his own iPhone 6S, right down to the component level!

The journey began by acquiring the heavily advertised uni-body aluminium back, which clearly does not command the same level of regard in these Chinese markets as it does in Apple’s advertisements. [Scotty’s] vlog shows vast piles of such backings tossed in the streets of Shenzhen. After buying the right one, he needed to get it laser etched with all the relevant US-variant markings. This is obviously not a problem when the etching shop is conveniently situated a stone’s throw away, tucked rather modestly beneath a flight of stairs.

Next came the screen assembly which, to stay true to the original cause, was purchased as individual parts (the digitizer, the LCD and the backlight) and later casually assembled in another shop, quicker than it would take you to put on the clean-room coverall you thought was needed to complete such a job.

[Scotty] reports that sourcing and assembling the logic board proved to be the hardest part of this challenge. Even though he successfully purchased an unpopulated PCB and all the silicon, soldering them together proved to be a dead end, and for now he has instead fitted a used logic board. We feel this should be absolutely conquerable with the right tools and experience.

All the other bits and pieces were acquired as separate components, and the final result is largely indistinguishable from the genuine article but costs only $300. This is not surprising, as Apple’s notorious markup has been previously uncovered in various teardowns.

Check out [Scotty’s] full video, which includes a lot of insight into these enigmatic Shenzhen markets. We sure loved every bit of it. Now that’s one way to get a bargain!



Read the full article here by Hack a Day

Get 'Linux: Embedded Development' ebook ($63 Value) FREE for a limited time

An embedded system is a device with a computer inside that doesn't look like a computer. Washing machines, televisions, printers, cars, aircraft, and robots are all controlled by a computer of some sort, and in some cases, more than one. As these devices become more complex, and as our expectations of the things that we can do with them expand, the need for a powerful operating system to control them grows. The Linux: Embedded Development ebook from Packt Publishing will tell you everything you need to know to leverage the power of Linux to develop captivating and powerful embedded Linux…


Read the full article here by Betanews

The Future of Real-Time Rendering with Lumberyard

At GDC 2017, we had an amazing opportunity to see some closed-door demos that Amazon Lumberyard was showing to the press. One demo was entirely devoted to achieving better, more realistic rendering in real time.

Rendering has developed considerably over time. We’ve made huge leaps from Pixar’s pre-rendered experiments within the movie industry in 1987 to modern day 3D, real-time rendered video games like Destiny and For Honor. However, there are still issues with jaggies, broken shadows, specular aliasing, and texture noise that need to be addressed within the rendering world. These issues have become the sworn enemies of Hao Chen, Senior Principal Engineer with Amazon.

Over the years, Chen has worked on solving complex problems with video game rendering. He was the Senior Graphics Architect on Bungie’s Destiny, where he worked alongside some of the best real-time graphics engineers in the industry. If you’re interested in his work, please check out an interview he conducted for GDC 2017.

We often sacrifice pixel quality. PC games usually have that typical ‘PC game look.’ Very noisy, very aliasing. Jaggies. Gamers actually came to accept these defects. But, I think now is the time to tackle pixel quality because the power efficiency of the geometry engine is very, very good and it’s going to help improve the throughput.

If you take a look at my career, at all the games I’ve worked on over the years, you can see that the throughput progressed immensely. We’ve observed a 5000x increase of polygon count. In terms of pixel resolution, we have 54x more pixels. And it’s still growing with the introduction of 4K and 8K displays.

Hao Chen 

However, this incredible progress doesn’t solve the most basic issues with pixel quality, which are jaggies, broken shadows, messy hair, noisy textures, pixelated foliage, strobing highlights, and more. And these defects are especially irritating for artists who want to get that perfect V-Ray look in their real-time renders but simply can’t.

Some believe that frame rate and higher resolution are the only aspects of rendering that truly matter to gamers and that they don’t notice other defects. This, however, is not entirely true. First, hiding some of these defects will hurt the throughput, and second, these defects are bad for VR experiences.

But it’s fairly straightforward to implement changes that get rid of these digital artifacts. Rather than requiring individuals to remake content, all developers would need is the right engine to do the job. And Amazon believes Lumberyard is the solution.

To show these improvements, we’ve picked a scene called The Bistro. It demonstrates our latest renderer, which runs on DX12 with HDR and features a lot of pixel quality improvements.

Hao Chen 

Let’s look at some of the math behind it and figure out what aliasing actually is. From a signal processing perspective, there are a number of steps the signal takes to get from a computer game to our eyes: discretization, sampling, reconstruction, and "DA" (display) conversion. Each of these steps can cause digital artifacts. The main source of aliasing is not sampling the signal at a high enough frequency; this is what makes the wheel go backward in this YouTube video.

As the wheel starts turning faster and faster, the video is still sampling at 30 frames per second. One starts to see a false, aliased signal, which makes the wheel in the video appear to spin slowly backwards.

There are only two strategies that can help you combat aliasing: taking more samples and pre-filtering the input signal. In film, we generally see plenty of samples and expensive filtering. It may take an hour or even days to render a single frame. Games, on the other hand, can’t work like that. They have to hit 60 FPS. So, games use fewer samples and very cheap filtering. These two factors are the source of jaggies seen in games.
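
To make the wagon-wheel example concrete, here is a small numpy sketch (purely illustrative, not Lumberyard code) that samples a wheel spinning at 29 revolutions per second with a 30 fps "camera". Because the rotation rate is above half the sampling rate, the sampled motion aliases and the wheel appears to crawl backwards.

    # Wagon-wheel aliasing demo: a wheel spinning just under the frame rate
    # appears to rotate slowly backwards when sampled at 30 fps.
    import numpy as np

    SAMPLE_RATE_HZ = 30.0      # video frame rate
    TRUE_REV_PER_SEC = 29.0    # wheel speed, well above the Nyquist limit of 15 Hz

    dt = 1.0 / SAMPLE_RATE_HZ
    frames = np.arange(10)
    # Spoke angle at each sampled frame, in fractions of a revolution.
    angle_rev = (TRUE_REV_PER_SEC * frames * dt) % 1.0

    # Apparent per-frame motion, wrapped into [-0.5, 0.5): the eye perceives
    # the smallest rotation that explains two consecutive frames.
    step = np.diff(angle_rev)
    apparent = (step + 0.5) % 1.0 - 0.5

    print("true rotation per frame:     %+.3f rev" % (TRUE_REV_PER_SEC * dt))
    print("apparent rotation per frame: %+.3f rev" % apparent.mean())
    # Prints roughly +0.967 vs -0.033: the wheel seems to spin backwards.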

So, how many samples are enough for games? Where do you place these samples? How do we pre-filter the sample? What’s the support of the filter? These are hard questions that need answers, and Amazon Lumberyard has worked with NVIDIA Research to find these answers to eliminate bad pixels from the rendering world.

When we first started out, we wanted to get the coolest and latest techniques as well as work with the best researchers in the field. We found Yves Lafont’s team, and we implemented a number of the latest techniques from NVIDIA Research in Lumberyard.

Hao Chen

To come up with the solution, Amazon Lumberyard first had to look at the existing techniques of anti-aliasing.

For Lumberyard, the development team picked up a special form of Temporal AA.

Temporal AA is very cool. Instead of spreading the sample signal spatially, it spreads the signal over time. You render some samples in one frame, render some other samples in the next frame, and you blend them together. This is efficient and it also solves temporal artifacts.

Hao Chen

Below is how this solution works.

To battle ghosting, Lumberyard relied on a new technique from NVIDIA Research, which was introduced last year.

This approach allows for less ghosting and helps to achieve particularly interesting results. Have a look at this video.

On the left you can see Temporal AA with Variance Clipping, and on the right you can see the original technique. It does look pretty amazing. The picture is very smooth and detailed while containing no blurring or jaggies. But that’s not all, guys!
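
For the curious, here is a toy numpy sketch of the variance-clipping idea described above. It is a simplified illustration rather than Lumberyard's actual renderer code: it uses a per-channel clamp of the history color against the neighborhood mean-and-sigma bounding box, whereas the published NVIDIA technique clips the history sample along the line toward the neighborhood mean.

    # Toy Temporal AA step with variance clipping, for a single pixel.
    import numpy as np

    def temporal_aa_pixel(neighborhood_rgb, history_rgb, blend=0.1, gamma=1.0):
        """neighborhood_rgb: (9, 3) colors of the current frame's 3x3 block
        (center pixel at index 4); history_rgb: (3,) reprojected color from
        the previous frame."""
        mean = neighborhood_rgb.mean(axis=0)
        std = neighborhood_rgb.std(axis=0)

        # Clamp the history to mean +/- gamma * sigma. Stale history that
        # falls outside the local color distribution (the cause of ghosting)
        # gets pulled back before blending.
        clipped = np.clip(history_rgb, mean - gamma * std, mean + gamma * std)

        current = neighborhood_rgb[4]
        return blend * current + (1.0 - blend) * clipped

    # Usage: a bright, stale history pixel is rejected toward the local colors.
    rng = np.random.default_rng(0)
    neighborhood = rng.uniform(0.2, 0.4, size=(9, 3))
    stale_history = np.array([1.0, 1.0, 1.0])
    print(temporal_aa_pixel(neighborhood, stale_history))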

Another technique the company uses in the engine’s renderer is designed to fight specular aliasing.

There are a number of specular anti-aliasing techniques, but for their engine the Lumberyard team used Anton Kaplanyan’s most recent work (published in 2017). And here’s an example of what this technology lets you do: on the left you see the shot with specular anti-aliasing on, and on the right it’s turned off. This technology even takes into consideration the curvature of the geometry to blur the specular highlight.

One of the final advancements made in the world of rendering is Order-independent Transparency. Here’s a quick overview of the problem and the solution from Marco Salvi, who discussed these topics in Lumberyard’s GDC 2017 presentation.

In the end, all these techniques combined were instrumental in achieving a very high level of visual fidelity for Amazon Lumberyard. With global illumination on, PBR engaged, and very complex scenes loaded, the images we got were almost movie-like!

Although not entirely there yet, it does look incredible and is reminiscent of beautiful visuals as seen from a handful of Sony Computer Entertainment games (such as The Order: 1886). And it looks to be a very promising area for exploration within the artistic community. We’d love to see its potential explored by industry leaders to see how it works with various environments and different scenes. We believe the open world aspect of the engine will benefit from these new developments.





Read the full article here by 80lvl

Tuesday, 11 April 2017

Microsoft Finally Reveals What Data Windows 10 Really Collects

Starting today, Microsoft is updating its privacy statement and publishing information about the data it collects as part of Windows 10. From a report: "For the first time, we have published a complete list of the diagnostic data collected at the Basic level," explains Windows chief Terry Myerson in a company blog post. "We are also providing a detailed summary of the data we collect from users at both Basic and Full levels of diagnostics." Microsoft is introducing better controls around its Windows 10 data collection levels in the latest Creators Update, which will start rolling out broadly next week. The controls allow users to switch between basic and full levels of data collection. "Our teams have also worked diligently since the Anniversary Update to re-assess what data is strictly necessary at the Basic level to keep Windows 10 devices up to date and secure," says Myerson. "As a result, we have reduced the number of events collected and reduced, by about half, the volume of data we collect at the Basic level."




Read the full article here by Slashdot

Here’s how an otherwise humdrum virus sparks celiac disease