Saturday, 11 June 2016

Tiny Variscite DART joins growing list of Brillo-ready boards

Variscite’s tiny, i.MX6 UL based DART-6UL module now supports Brillo, adding to the momentum growing behind Google’s Android-based, IoT-focused OS.

Variscite announced its DART-6UL in December, as a follow-on to the similarly tiny DART-MX6. The 50 x 25mm computer-on-module, which ships with Yocto Project Linux support, now supports Google’s lightweight, Android-based Brillo operating system as well. It’s one of several boards and embedded gizmos that support the open source, IoT-focused distro, with more on the way (see farther below).

Read more




Read the full article here by tuxmachines.org

Valve rolls out free VR creation tool for new destinations workshop

Friday, 10 June 2016

You can’t defeat what’s already defeated.



Read the full article here by http://owlturd.com/

Android-x86 6.0-RC1 Released With OpenGL ES Via Mesa, Linux 4.4.12

After months of Android-x86 6.0 test releases, the Android-x86 "Marshmallow" release is up to the release candidate phase.

Android-x86 6.0-RC1 has been announced. It supports both 32-bit and 64-bit kernels and user-spaces, runs on the Linux 4.4.12 kernel, offers OpenGL ES 3.x support via the R600g/RadeonSI/Nouveau/VMware/VirGL drivers with an OpenGL ES fallback via LLVMpipe, supports UEFI booting and various file-systems, and adds many other features that were previously unavailable in Android-x86.

Those interested in learning more about Android-x86 6.0-RC1 are encouraged to visit Android-x86.org.



Read the full article here by Phoronix

Man Page

For even more info, see blarbl(2)(3) and birb(3ahhaha I'm kidding, just Google it like a normal person.

Read the full article here by xkcd.com

Lenovo shows off a bendable phone for your wrist

At Lenovo's big keynote in San Francisco today, CTO Peter Hortensius introduced a new bendable screen that snaps around the wrist like an old-school slap bracelet. It's the company's way of moving into a hingeless device era.

Screenwriter and producer Meghan McCarthy of My Little Pony: Friendship Is Magic fame, planted in the audience, got up to demonstrate the new technology. She wrapped a seemingly active phone screen onto her wrist. There was an audible cracking sound, but the screen snapped on like a bulky wristband. Next she moved on to a folding tablet with cat faces. She bent it in half and put the flexible screen next to her ear, pretending to make a call.

Hortensius then did a quick walk-through of the bendable-screen technology. "What else has to bend as the screen bends?" he said. A flexible screen needs equally flexible parts, including the battery and the motherboard, which need to fold and unfold along with it. It's an innovative idea, but that's all it is for now. The flexible phone isn't a fully functional product yet.

Lenovo promised that the concept device will be on display after the keynote. Watch this space for more details on the folding screens.

Get all the news from today's Lenovo and Motorola event right here!



Read the full article here by Engadget

Meet Motorola’s new flagship, the modular Moto Z

The Moto Z. The Z Force looks just like this, but a bit thicker.


Today, Motorola took the wraps off its 2016 flagship, the Moto Z. These phones (there are two of them) are markedly different from past Moto flagships, which is appropriate since the Moto Z represents Motorola's transition from "A Google Company" to Lenovo subsidiary. The actual deal closed over a year ago, but product pipelines mean only now are we seeing Lenovo's full influence.

The Moto Z is a metal phone the company is calling "the world's thinnest premium smartphone." It has a 5.5-inch, 1440p AMOLED display, a Snapdragon 820, 4GB of RAM, and 32 or 64GB of storage with a MicroSD slot. For cameras, there's a 13MP rear sensor with OIS and laser autofocus and a 5MP front camera with a wide angle lens. Other extras include a fingerprint sensor, "water-repellant coating," and a front flash.

Lenovorola hasn't said how big the battery on the Moto Z is, only that it will get "30 hours" of battery life.




Read the full article here by Ars Technica

Thursday, 9 June 2016

Web Audio API: Why Compose When You Can Code?

The very first draft of a web audio API appeared at the W3C in 2011. Although playing audio in webpages has been supported for a long time, a proper way of generating audio from the web browser was not available until quite recently. I personally attribute this to Google Chrome: it was in Google's interest for the browser to become the most important part of the computer, and you may recall that the realm of the web browser didn't start to change much until Chrome appeared. Back then, using sound in a webpage was considered a poor design decision, but once the idea of web experiments took hold, web audio started to make sense again. Web browsers are now another tool for artistic expression, and video and audio in the browser play a vital role in that.


Web Audio API can be quite hard to use for some purposes, as it is still under development, but a number of JavaScript libraries already exist to make things easier. Here I am going to show you how to get started with the Web Audio API using a library called Tone.js. With it, you will be able to cover most of your browser sound needs by learning just the basics.

Hello Web Audio API

Getting Started

We will begin without using the library. Our first experiment is going to involve making three sine waves. As this will be a simple example, we will create just one file named hello.html, a bare HTML file with a small amount of markup.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title> Hello web audio </title>
  </head>
  <body>
  </body>
  <script>
  </script>
</html>

The core of Web Audio API is the audio context. The audio context is an object that will contain everything related to web audio. It’s not considered a good practice to have more than one audio context in a single project. We will begin by instantiating an audio context following the recommendations given by Mozilla’s Web Audio API documentation.

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

Making an Oscillator

With an audio context instantiated, you already have one audio component: audioCtx.destination. This is like your speaker; any source that should be heard has to connect to audioCtx.destination. Now, to produce some sound, let's create an oscillator:

var sine = audioCtx.createOscillator();

Great, but not enough. It also needs to be started and connected to our audioCtx.destination:

sine.start();
sine.connect(audioCtx.destination);

With these four lines, you will have a pretty annoying webpage that plays a sine tone, but now you understand how modules connect to one another. In the following script there are three sine oscillators connected to the output, each at a different pitch. The code is self-explanatory:

//create the context for the web audio
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
//create, tune, start and connect each oscillator sinea, sineb and sinec
var sinea = audioCtx.createOscillator();
sinea.frequency.value = 440;
sinea.type = "sine";
sinea.start();
sinea.connect(audioCtx.destination);
var sineb = audioCtx.createOscillator();
sineb.frequency.value = 523.25;
sineb.type = "sine";
sineb.start();
sineb.connect(audioCtx.destination);
var sinec = audioCtx.createOscillator();
sinec.frequency.value = 698.46;
sinec.type = "sine";
sinec.start();
sinec.connect(audioCtx.destination);

Oscillators are not restricted to sine waves; they can also produce triangle, sawtooth, square, and custom waveforms, as stated in the MDN documentation.
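As an aside (my own note, not from the original article): the three frequencies above, 440, 523.25 and 698.46 Hz, are the equal-tempered pitches A4, C5 and F5. The mapping from MIDI note numbers to frequency is a one-liner you can reuse when tuning oscillators:

```javascript
// Hypothetical helper (not part of the Web Audio API): convert a MIDI note
// number to its equal-tempered frequency in Hz, with A4 = MIDI 69 = 440 Hz.
function midiToFreq(midi) {
  return 440 * Math.pow(2, (midi - 69) / 12);
}

console.log(midiToFreq(69).toFixed(2)); // "440.00" -> A4
console.log(midiToFreq(72).toFixed(2)); // "523.25" -> C5
console.log(midiToFreq(77).toFixed(2)); // "698.46" -> F5
```

Values computed this way can be assigned straight to osc.frequency.value.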

Patching Logic of Web Audio

Next, we will add a gain module to our orchestra of Web Audio components. This module lets us change the amplitude of our sounds; it is akin to a volume knob. We have already used the connect function to attach an oscillator to the audio output, and the same connect function can join any two audio components. If you are using Firefox and open the Web Audio tab of the developer tools, you can see the current graph: each oscillator patched directly into the destination.

If we want to change the volume, the patch needs to be rearranged so that the oscillators are no longer connected to the audio destination, but instead to a gain module, with that gain module connected to the destination. It helps to imagine doing this with guitar pedals and cables. The code looks like this:

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
// we create the gain module, named volume, and connect it to the audio destination
var volume = audioCtx.createGain();
volume.connect(audioCtx.destination);
// these sines are the same, except for the last connect statement.
// Now they are connected to the volume gain module and not to the audio destination
var sinea = audioCtx.createOscillator();
sinea.frequency.value = 440;
sinea.type = "sine";
sinea.start();
sinea.connect(volume);
var sineb = audioCtx.createOscillator();
sineb.frequency.value = 523.25;
sineb.type = "sine";
sineb.start();
sineb.connect(volume);
var sinec = audioCtx.createOscillator();
sinec.frequency.value = 698.46;
sinec.type = "sine";
sinec.start();
sinec.connect(volume);
volume.gain.value=0.2;

You can find the solution at http://ift.tt/1UDLirk.
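A note on why volume.gain.value is set to 0.2 (my own back-of-the-envelope check, not from the original): each oscillator swings between -1 and 1, so three of them can momentarily sum to a peak of 3.0, well outside the range the destination can reproduce without clipping. Scaling them through the gain node keeps the worst case inside [-1, 1]:

```javascript
// Worst-case peak of N full-scale oscillators summed through one gain node.
const numOscillators = 3;
const gain = 0.2;
const worstCasePeak = numOscillators * gain; // each oscillator peaks at 1.0

console.log(worstCasePeak.toFixed(1)); // "0.6"
console.log(worstCasePeak <= 1.0);     // true, so no clipping
```

Any gain below 1/3 would avoid clipping here; 0.2 leaves some extra headroom.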

GainNode is the most basic effect unit, but there is also a delay, a convolver, a biquadratic filter, a stereo panner, a wave shaper, and many others. You can grab new effects from libraries such as Tone.js.

Storing one of these sound patches in objects of their own will allow you to reuse them as needed, and create more complex orchestrations with less code. This could be a topic for a future post.
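As a sketch of that idea (the makeVoice name and its signature are my own, not from the article), one such patch can be wrapped in a factory function that takes the audio context and destination as parameters:

```javascript
// Hypothetical factory: builds a reusable oscillator -> gain -> destination
// patch. `ctx` can be a real AudioContext or any object with the same API.
function makeVoice(ctx, destination, freq, level) {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = freq;    // pitch of this voice
  osc.type = "sine";
  gain.gain.value = level;       // per-voice volume
  osc.connect(gain);             // oscillator feeds the gain node...
  gain.connect(destination);     // ...which feeds the destination
  osc.start();
  return { osc: osc, gain: gain };
}
```

In a browser you would call makeVoice(audioCtx, audioCtx.destination, 440, 0.2) once per tone, shrinking the three-oscillator example above to three calls.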

Making Things Easier with Tone.js

Now that we have taken a brief look how the vanilla Web Audio modules work, let us take a look at the awesome Web Audio framework: Tone.js. With this (and NexusUI for user interface components), we can very easily build more interesting synths and sounds. To try things out, let us make a sampler and apply some user interactive effects to it, and then we will add some simple controls for this sample.

Tone.js Sampler

We can start by creating a simple project structure:

simpleSampler
|-- js
    |-- nexusUI.js
    |-- Tone.js
|-- noisecollector_hit4.wav
|-- sampler.html

Our JavaScript libraries will reside in the js directory. For the purposes of this demo, we can use NoiseCollector’s hit4.wav file that can be downloaded from Freesound.org.

Tone.js provides its functionalities through Player objects. The basic capability of the object is to load a sample, and to play it either in a loop or once. Our first step here is to create a player object in a “sampler” var, inside the sampler.html file:

<!doctype html>
<html>
    <head>
        <title> Sampler </title>
        <script type="text/javascript" src="js/nexusUI.js" ></script>
        <script type="text/javascript" src="js/Tone.js" ></script>
        <script>
            var sampler = new Tone.Player("noisecollector_hit4.wav", function() {
                console.log("samples loaded");
            });
        </script>
    </head>
    <body>
    </body>
</html>

Note that the first parameter of the player constructor is the name of the WAV file, and the second is a callback function. WAV is not the only supported file type, and the compatibility depends on the web browser more than the library. The callback function will run when the player has finished loading the sample into its buffer.

We also have to connect our sampler to the output. The Tone.js way of doing this is:

sampler.toMaster();

… where sampler is our Tone.Player object; add this line right after the player is created. The toMaster function is shorthand for connect(Tone.Master).

If you open your web browser with the developer console open, you should see the “samples loaded” message, indicating that the player was created correctly. At this point you may want to hear the sample. To do that, we need to add a button to the webpage, and program it to play the sample once pressed. We are going to use a NexusUI button in the body:

<canvas nx="button"></canvas>

You should now see a rounded button being rendered in the document. To program it to play our sample, we add a NexusUI listener, which looks like this:

button1.on('*',function(data) {
    console.log("button pressed!");
})

One notable feature of NexusUI is that it creates a global variable for each NexusUI element. You can tell NexusUI not to do that, and instead keep these variables only in nx.widgets[], by setting nx.globalWidgets to false. Here we are only going to create a couple of elements, so we'll stick with the default behaviour.

As in jQuery, we can attach handlers with .on, where the first argument is the event name. Here we assign a function to whatever is done to the button, and this "whatever" is written as "*". You can learn more about the events for each element in the NexusUI API. To play the sample instead of logging messages when we press the button, we should run the start function of our sampler.

nx.onload = function() {
    button1.on('*', function(data) {
        console.log("button pressed!");
        sampler.start();
    });
}

Also notice that the listener goes inside an onload callback. NexusUI elements are drawn on a canvas, and you can't refer to them until nx calls the onload function, just as you would wait for the DOM to be ready before touching elements in jQuery.

The event is triggered both on mouse down and on release. If you want it to fire only on press, you have to check whether data.press equals one.
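That press-versus-release check can be factored into a tiny helper (a sketch of mine; onPressOnly is not part of NexusUI, and I assume the handler's argument exposes the state in its press field as described above):

```javascript
// Wrap an action so it runs only when the event is a press (press === 1),
// not a release (press === 0).
function onPressOnly(action) {
  return function (data) {
    if (data.press === 1) action();
  };
}

// Simulating one press/release cycle:
let fired = 0;
const handler = onPressOnly(function () { fired += 1; });
handler({ press: 1 }); // press: action runs
handler({ press: 0 }); // release: ignored
console.log(fired); // 1
```

With NexusUI this would read button1.on('*', onPressOnly(function () { sampler.start(); })).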

With this, you should have a button that plays the sample on each press. If you set sampler.retrigger to true, it will allow you to play the sample regardless of whether it is playing or not. Otherwise, you have to wait until the sample finishes to retrigger it.

Applying Effects

With Tone.js, we can easily create a delay:

var delay = new Tone.FeedbackDelay("16n", 0.5).toMaster();

The first argument is the delay time, which can be written in musical notation ("16n" is a sixteenth note). The second is the feedback amount: how much of the delayed signal is fed back into the delay (the same delay.feedback parameter tweaked below). You usually don't want a delay to be 100% wet either, because a delay is interesting in relation to the original sound; the wet signal alone is not as appealing as both together.
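To build some intuition for the 0.5 setting, here is a deliberately simplified model of a feedback delay (my own approximation; it ignores the wet/dry mix and any filtering in the feedback loop): each successive echo of a unit impulse returns at feedback^n of the original level.

```javascript
// Amplitudes of the first `count` echoes of a unit impulse for a given
// feedback coefficient, under an idealized lossless-loop model.
function echoAmplitudes(feedback, count) {
  const out = [];
  for (let n = 1; n <= count; n++) {
    out.push(Math.pow(feedback, n));
  }
  return out;
}

console.log(echoAmplitudes(0.5, 4)); // [ 0.5, 0.25, 0.125, 0.0625 ]
```

At 0.5 each repeat is half as loud and the tail dies out quickly; values close to 1 ring for a long time, and values of 1 or more never decay.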

The next step is to unplug our sampler from master and plug it instead to the delay. Tweak the line where sampler is connected to master:

sampler.connect(delay);

Now try the button again and see the difference.

Next, we will add two dials to the body of our document:

<canvas nx="dial"></canvas>
<canvas nx="dial"></canvas>

And we apply the dials’ values to the delay effect using NexusUI listeners:

dial1.on('*',function(data) {
    delay.delayTime.value=data.value;
})
dial2.on('*',function(data) {
    delay.feedback.value=data.value;
})

The parameters that you can tweak for each effect can be found in the Tone.js documentation. Now you are ready to try the example and tweak the delay parameters with the NexusUI dials. This process can easily be repeated with any NexusUI element, and is not limited to effects. For instance, try adding another dial and adding its listener as follows:

dial3.on('*',function(data) {
    sampler.playbackRate=data.value;
})
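One thing to be aware of (my own note): changing playbackRate also transposes the sample, because the player resamples rather than time-stretches. A rate r shifts the pitch by 12 * log2(r) semitones:

```javascript
// Semitone shift produced by playing a sample at `rate` times normal speed
// (assumes plain resampling with no time-stretching).
function rateToSemitones(rate) {
  return 12 * Math.log2(rate);
}

console.log(rateToSemitones(2));   // 12  -> one octave up
console.log(rateToSemitones(0.5)); // -12 -> one octave down
console.log(rateToSemitones(1));   // 0   -> unchanged
```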

You can find these files at http://ift.tt/1UDL57t

Conclusion

When I went through these APIs, I started feeling overwhelmed by all the possibilities and ideas that started to come to my mind. The big difference between this implementation of audio and the traditional implementations of digital audio is not in the audio itself, but in the context. There are no new methods of making synthesis here. Rather the innovation is that audio and music making are now meeting web technologies.

I am personally involved in electronic music making, and this area has always had this paradox of the ambiguity between actually performing music and just pressing play to a recorded track. If you want to really make live electronic music, you must be able to create your own performative tools or “music-making robots” for live improvisation. But if the performance of electronic music becomes simply tweaking parameters in pre-prepared music making algorithms, then the audience can also be involved in this process. I have been working on little experiments regarding this integration of web and audio for crowdsourced music, and perhaps soon we will be attending parties where the music comes from the audience through their smartphones. After all, it’s not that different from rhythmic jams we might have enjoyed in the cave ages.

About the author

Joaquín Aldunate, Finland

member since February 21, 2016

Joaquín is a new media artist and developer. He is an expert in user interaction design and is familiar with many development environments. He has done concept prototyping and can fully develop web user interfaces with HTML, JavaScript, CSS, and occasionally PHP. He has experience working as part of a team composed of many different backgrounds so he can adapt and contribute in a number of ways to get the job done.

[click to continue...]



Read the full article here by Toptal Engineering Blog

Whatever happened to ousted Ferrari F1 boss Marco Mattiacci? Here’s the answer

Marco Mattiacci had a brief career as team principal of the Ferrari F1 team in 2014, stepping in to replace Stefano Domenicali before being replaced himself eight months later by Maurizio Arrivabene. But not before he had overseen the Scuderia’s divorce from Fernando Alonso and signed Sebastian Vettel to replace him. Mattiacci, who famously introduced […]

Read the full article here by James Allen on F1 - The official website

Wednesday, 8 June 2016

Finally there's an open source drop-in replacement for MS Office

Personally, I'll be sticking with the one-two punch of Google Docs and LibreOffice...but I have no need to work with MS Office (as my editor now works with LibreOffice as well). However, it's reassuring to know, should I have to open an MS Office password protected file (or a file that LibreOffice doesn't care for), I now have an option.

Read more




Read the full article here by tuxmachines.org

Tuesday, 7 June 2016

404 Not Found - A Coloring Book by The Oatmeal

404 Not Found - A Coloring Book by The Oatmeal

I wrote a coloring book about a missing robot.

View

Read the full article here by The Oatmeal - Comics, Quizzes, & Stories

Neoglyphic brings dynamic fur and fibre to Unreal Engine 4

Neoglyphic brings dynamic fur and fibre to Unreal Engine 4

US-based firm Neoglyphic Entertainment has launched its NeoFur technology for Epic Games’ widely-used engine.

Unreal Engine 4 users will now be able to use the new tool to create objects and characters with real-time fur and fibre for any project, whether console, mobile or VR.

The tech uses shell-rendering techniques to create the illusion of millions of strands of fur, complete with realistic physics and new level-of-detail systems designed to ensure fur is rendered at high quality.

NeoFur was actually one of the five projects backed in the recent wave of Unreal Dev Grants.

“Films like Disney’s Zootopia have shown us the beauty that fur can bring to a production, but at a high cost,” said technical art director Carlos Montero. “With NeoFur, studios of any type can afford high-end technology that dramatically enhances the look of their environments and characters. 

“There’s no reason a studio needs to spend valuable time and money to achieve believable looking fur, as NeoFur’s solution saves time and costs while providing breathtaking results.”

You can find out more about the technology and licensing options at http://ift.tt/1r2L85Z.



Read the full article here by Develop Feed

Huawei manager says it's making another Nexus device

It's now been seven months since Huawei began selling its first (very well made) Nexus smartphone and consumers are already looking at what Google has planned for its 2016 refresh. The search giant typically keeps its manufacturing partners close to its chest, so it's fallen on the Chinese company to start teasing a future device on its behalf. In an interview with Gear Burn, Charlene Munilall, General Manager of its South African consumer business group, has suggested that Huawei is building another Nexus device, telling the gadget site "we're doing the Nexus again this year, by the way."

Huawei was one of two Nexus smartphone partners last year, with LG supplying the Nexus 5X. Google has said it will do the same thing this time around to cater for fans who like the option of a core Nexus handset or something more high end. Huawei could be placed to fill one of those spots, but early rumors have suggested that HTC is on board to make both Nexus smartphones, which are reported to be codenamed Marlin and Sailfish.

If that is the case, there is also the possibility that we'll see a Huawei-made tablet. Google may want to follow up the release of the Pixel C -- the first tablet that it designed completely in-house -- with another Surface-like slate, and Huawei could be the one to provide it. That is if Huawei's regional executive isn't speaking out of turn, of course. We're still a few months away from a potential Google event but we've reached out to the companies involved to clarify the manager's statement.

Via: Pocket Lint

Source: Gear Burn



Read the full article here by Engadget

Sunday, 5 June 2016

Maintaining Java ORM Code

Tse-Hsun Chen, Weiyi Shang, Jinqiu Yang, Ahmed E. Hassan, and Michael W. Godfrey: "An Empirical Study on the Practice of Maintaining Object-Relational Mapping Code in Java Systems". MSR'16, May 2016, http://ift.tt/210MCtb.

From the abstract:

Despite the advantages of using ORM [object-relational mapping] frameworks, we observe several difficulties in maintaining ORM code (i.e., code that makes use of ORM frameworks) when cooperating with our industrial partner. After conducting studies on other open source systems, we find that such difficulties are common in other Java systems. Our study finds that i) ORM cannot completely encapsulate database accesses in objects or abstract the underlying database technology, thus making ORM code changes more scattered; ii) ORM code changes are more frequent than regular code, but there is a lack of tools that help developers verify ORM code at compilation time; iii) changes to ORM code are more commonly due to performance or security reasons; however, traditional static code analyzers need to be extended to capture the peculiarities of ORM code in order to detect such problems. Our study highlights the hidden maintenance costs of using ORM frameworks, and provides some initial insights about potential approaches to help maintain ORM code. Future studies should carefully examine ORM code, in particular given the rising use of ORM in modern software systems.

I wasn't surprised to find that ORMs can't completely encapsulate database access, but I was surprised to learn that the result is scattered code changes. I was even more surprised to find that changes to ORM code are more common than changes to "regular" code, but that's one of the reasons to read research: to find out where personal experience is atypical of general practice. And I hope the authors of various IDEs take the third point to heart: PyCharm's ability to debug Django templates saved me days of tedious debugging (at least), and similar support for debugging ORM code might have saved me even more.



Read the full article here by It Will Never Work in Theory

Saturday, 4 June 2016

Python/Unix Hybrid Demoed at PyCon

A new shell "combines the Python language with features of Bash Unix and the fish and zsh shells," according to InfoWorld. An anonymous reader writes: Pronounced "conch," but spelled Xonsh, it runs on Linux, Windows, and Mac OS X systems, bringing Python libraries to the command line -- for example, the ability to use regular expressions when globbing files. "The first thing you'll notice about Xonsh is that it's really meant to be used as a general-purpose shell," the lead developer explained in a presentation at PyCon. "But on the other hand, it really is Python, so you can do things like add two numbers together." They're describing it as "a Python-ish, BASHwards-looking shell language and command prompt...a superset of Python 3.4+ with additional support for the best parts of shells that you are used to, such as Bash, zsh, fish, and IPython...the superglue that bonds Python to a command-line interface and other shells."

Read more of this story at Slashdot.



Read the full article here by Slashdot

Facebook shoves mobile web users toward the Messenger app

Remember when Facebook divorced instant messages from the main app and forced folks to download Messenger if they wanted a direct way to send quick missives while on the go? Well, the social network is doing something similar once again. Reports are floating around that, for some people, accessing messages from the mobile web either isn't possible or comes with a few annoying obstacles.

Android Authority writes that some mobile web users are getting a reminder that "Your conversations are moving to Messenger" with a near instantaneous redirect to the Google Play download page for Zuckerberg's blue chat application. In our tests, it's been hit or miss. I can still access everything from Chrome on iOS without any hiccups, but Android-owning coworkers experienced a couple of hurdles to get to their existing message threads -- including a few bounces to Messenger. Zuckerberg and Co. are cranking the nag factor way up here for Android users.

Facebook's motivation is pretty clear: It's putting an awful lot of effort into Messenger and wants everyone to use it, whether the users care to or not. After all, there have been hints at ads making their way to the chat app. If there are ways to circumvent seeing those, people are going to use them. Until they're gone, that is.

Source: Android Authority



Read the full article here by Engadget

Friday, 3 June 2016

How GPUs Are Delivering a 10,000x Improvement in Missile-Tracking Systems

Getting a 2-3x performance boost with an upgraded GPU is a matter of delight. Applying GPU technology to get a 10,000x performance improvement in a missile-tracking phased array radar system is a matter of national security. Yet that's exactly what the real-time systems and software engineering firm FishEye Software has done. In a talk at our recent GPU Technology Conference, Ted Selig, chief operating officer at the Boston-based company, described how GPU acceleration helped overcome the limitations of conventional systems. By doing so, FishEye has shown a 10,000-fold gain in price-performance with a system that operates at real-time speeds, tracking scores of missiles. Comments

Read the full article here by [H]ardOCP News/Article Feed

Thursday, 2 June 2016

Vysor allows you to mirror multiple Android devices

Google developer Koush announced an update to his popular device-mirroring app Vysor on Thursday that will enable users to share multiple Android screens to a PC, Linux or Mac machine simultaneously and then grant remote access to the device farm. The new "Share All" feature works just like the original Vysor Share: you plug the Android device into the computer via a USB cable, install the Vysor app and activate it to mirror the Android screen to the computer. However, Share All takes that process a step further by allowing the user to link multiple handsets to a computer and then use it as a shared server which multiple people can access remotely.

This could prove a boon for developers. You'd be able to centralize all your various test devices to a single computer and share access to the entire array with the rest of the team.

Source: Google+



Read the full article here by Engadget

Wednesday, 1 June 2016

Van Rossum promises Python 3.6 will move to GitHub

The new version of the language will have improved readability and cryptography, and could do away with the multicore-impeding GIL. With the planned Python 3.6 release, the popular scripting language is due for improvements in readability and cryptography. Also on the agenda: moving the project to GitHub to help contributors participate in the language's development.



Read the full article here by Cryptography News

Map Age Guide

Does the screeching chill your blood and herald death? If yes, banshee. If no, seagull.

Read the full article here by xkcd.com