Saturday, 19 August 2017

Google countdown teases Android O reveal during solar eclipse

Google is capitalizing on the solar eclipse hype to tease out the latest version of Android. On Monday at 2:40 PM Eastern, the internet juggernaut says we'll "meet the next release of Android and all of its super (sweet) new powers." Given that the final beta shipped last month, this could mean a few things: either the name will be revealed (my money is on Oreo), or we'll learn the update's release date. After months of previews and non-final builds, at least we won't have to wait much longer to find out when we'll actually be able to download and install the new OS.

What does Android O entail? Decreased load times, better battery life and a lot of onboard AI features like Google Lens (a visual search app), copy-less pasting and a picture-in-picture mode, among other things. None of them immediately call Nabisco's trademark cookie to mind, but hey, a guy can dream, right?

Source: Google



Read the full article here by Engadget

Here’s what happens to your retina if you view an eclipse without protection


Americans are making their last dashes for glasses and viewers to watch the rare total solar eclipse that will glide across the continental US on Monday. Meanwhile, eye doctors are trying to clear away any orbiting debris that's obscuring vision safety information—and spotlight the dangers of unsafe viewing.

Everyone knows that watching an eclipse—or staring into the Sun in general—can damage eyes. But in a series of articles published Friday in JAMA and JAMA Ophthalmology, a group of ophthalmologists explains in detail how sunlight damages the retina, plus dispels some misconceptions about viewing techniques for the rare event. They also provide a case study of what happens when you go into an eclipse event eyeballs-out.

David Calkins and Paul Sternberg of The Vanderbilt Eye Institute in Nashville, Tennessee, (which will experience a total eclipse) wrote one of the pieces in JAMA Ophthalmology. In it, they point out that many people have the misconception that an eclipse allows safe viewing of the Sun—that the lunar disk will cover everything but the Sun’s beautiful corona. This is true for those lucky ones that are along the path of the total eclipse, albeit only briefly. For those in the totality path, the Sun’s core will be blotted out for no more than two minutes and 41 seconds. “However, for most people, at least some portion of the Sun’s core will be visible during the event,” Calkins and Sternberg note.

If total eclipse viewers take off their protection, it’s important to have it back in place before the moon begins sliding aside. Otherwise, you risk damage. Everyone else should be cautious the whole time.

Core sunlight is intense, at an irradiance of about 1,350 watts per square meter, they note. Because of the refractive nature of our eyes, this leads to a hefty dose for the teeny 1.5mm fovea—the central pit of the retina. This pit is packed with photoreceptor cone cells responsible for color vision and is what gives us our sharp central vision.

Sunlight damages the eye in two distinct ways

Exposing your naked eyeball to sunlight causes solar retinopathy (aka photic retinopathy or solar retinitis), which involves two types of damage to the retina, particularly the fovea: burns and photochemical toxicity.

The former may be the most easily understood. A big chunk of sunlight is near-infrared radiation (700 to 1,500 nanometers), which can cause heat and thus burns, Calkins and Sternberg note. Because our eyes lack pain receptors, we can scorch the fovea without even noticing as we gaze at the Sun.

But, the eye experts write, a "more pressing concern" is photochemical toxicity. The large wallop of visible light from the Sun gets soaked up by photoactive materials in the eye that are just itching to form free radicals and reactive oxygen species. These include heme proteins, melanosomes, lipofuscin, and other chemicals. Once generated, free radicals and reactive oxygen species can attack many types of molecules and break down membranes, leading to tissue damage and cell death. Once retina tissue is destroyed, it cannot regenerate.

The authors say that a study of 45 people who damaged their eyes during a 1999 eclipse in the UK supports the idea that photochemical toxicity is a frequent cause of vision damage in solar retinopathy. In those cases, most of the damage was not permanent. Only four of the 45 reported lingering symptoms—discomfort and vision problems—after seven months.

But, of course, irreversible damage is more likely with longer and larger exposures. A case report published by Ta C. Chang and Kara Cavuoto of the University of Miami’s Bascom Palmer Eye Institute detailed lasting injuries in a 12-year-old girl. She suffered severe vision damage after staring at the full Sun for about one full minute. The eye experts presented images of her damaged retina, one of which was obtained by optical coherence tomography (OCT), which is kind of like an ultrasound for the eye that uses light to create cross-sectional tissue images. The images show bright pitted spots (see white arrows) in her fovea where photoreceptor segments have been wiped out by Sun damage. On follow-up exams, her vision hadn’t gotten better.

Infrared (A) and macular optical coherence tomography (OCT) (B) images demonstrating a hyperreflective spot in the fovea (arrowheads), presumably representing disruption of the inner and outer photoreceptor segments without evidence of underlying retinal pigment epithelial defects, consistent with bilateral solar retinopathy.

The JAMA Network

Here’s how to avoid such damage

In JAMA, Neil Bressler, of Johns Hopkins University School of Medicine, and colleagues, lay out how to safely view the eclipse.

First, what not to do: Don’t view the eclipse with your naked eye or unfiltered telescopes, binoculars, sunglasses (yeah, even if they’re really dark), camera lenses, or other optical devices. Don’t use anything that focuses light, even if you’re wearing eclipse glasses. NASA notes that the concentrated solar rays can damage the filter and enter your eyes.

To safely view the eclipse, turn to:

  • No. 14 Welders Glasses, available at welding supply stores.
  • Pinhole projectors. These allow you to see the eclipse by projecting sunlight through a pinhole onto a viewable surface (Note: it is not just looking through a pinhole). Here’s more information on these.
  • Aluminized Mylar filters/commercial glasses that do not have any damage or scratches. The American Astronomical Society says these should be verified by an accredited testing laboratory to meet the ISO 12312-2 international safety standard. Watch out, there are scams out there. The AAS has a handy list of reputable versions on their website.

If you do fear that your eyes have been damaged by sunlight, promptly see an eye doctor, who will be able to diagnose you with a clinical evaluation and diagnostic tests, such as OCT.

JAMA, 2017. DOI: 10.1001/jama.2017.9495 (About DOIs).

JAMA Ophthalmology, 2017. DOI: 10.1001/jamaophthalmol.2017.2936



Read the full article here by Ars Technica

Self-Driving Wheelchairs Debut in Hospitals and Airports


Autonomous vehicles can add a new member to their ranks—the self-driving wheelchair. This summer, two robotic wheelchairs made headlines: one at a Singaporean hospital and another at a Japanese airport.

The Singapore-MIT Alliance for Research and Technology, or SMART, developed the former, first deployed in Singapore’s Changi General Hospital in September 2016, where it successfully navigated the hospital’s hallways. It is the latest in a string of autonomous vehicles made by SMART, including a golf cart, electric taxi and, most recently, a scooter that zipped more than 100 MIT visitors around on tours in 2016.

The SMART self-driving wheelchair has been in development for about a year and a half, since January 2016, says Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator in the SMART Future Urban Mobility research group. Today, SMART has two wheelchairs in Singapore and two wheelchairs at MIT being tested in a variety of settings, says Rus.

The robot’s computer uses data from three lidars to make a map.  A localization algorithm then determines where it is in the map. The chair’s six wheels lend stability, and the chair is designed to make tight turns and fit through normal-sized doorframes. “When we visited several retirement communities, we realized that the quality of life is dependent on mobility. We want to make it really easy for people to move around,” said Rus in a recent MIT statement.

A second autonomous wheelchair recently premiered at Haneda Airport in Tokyo, designed by Panasonic and Whill, Inc., creator of the Model A Whill wheelchair, a sleek, hi-tech wheelchair now on the market in Japan and the United States.

According to a recent press release, Panasonic is planning to conduct technical trials of the WHILL NEXT this year. Like the SMART wheelchair, the WHILL NEXT uses sensors to detect nearby obstacles. It also employs automation technology developed for Panasonic’s autonomous (and adorable) hospital delivery robot, HOSPI. The wheelchair identifies its position, selects routes, and moves to a chosen destination based on a user’s input into a smartphone app. It can even be hailed with the app – the Uber of wheelchairs.

The WHILL NEXT is also able to sync up with nearby wheelchairs to travel in a column, which is useful for a family or a group, the company notes. Best of all, each wheelchair automatically returns to its home base, reducing the need for airport staff to collect the chairs.

Beyond use in hospitals and airports, the SMART team says they envision a connected autonomous mobility system, where a user could use a scooter or wheelchair indoors at an office, zip outside and pick up a golf cart to cross the parking lot, and slip into an autonomous car to drive home. Recent studies with the scooter suggest the control algorithms work indoors as well as out, according to a press release last year. “The autonomous wheelchair could be very useful in any pedestrian environment—including hospitals and airports—and we are exploring all these possibilities,” Rus tells IEEE Spectrum.

Yet the field faces the challenge of commercialization. Not all hi-tech wheelchairs have sold well, such as Dean Kamen’s stair-climbing iBot, whose $25,000 price tag was one reason the device was discontinued in 2009. But hopefully the next generation of wheelchairs won’t be as expensive, says Rus. “The system consists of an off-the-shelf wheelchair augmented with an autonomy package. We hope the price point of the autonomy package can come down to make the system affordable.”



Read the full article here by Computing: IEEE Spectrum

Friday, 18 August 2017

Push authentication can replace the password

For a vestige of the past, the password has managed to hold on and remain alive -- even though some of the top people in computing said that it had already died over a decade ago. In one of his more famous predictions, Microsoft founder Bill Gates said that passwords were on the way out already in 2004. Problem is that Gates, for all his wisdom, didn’t tell us what to use to replace passwords. "There is no doubt that over time, people are going to rely less and less on passwords," Gates said at the RSA conference in 2004. "People… [Continue Reading]


Read the full article here by Betanews

Wednesday, 16 August 2017

Intel’s self-driving supergroup brings Fiat Chrysler into the fold

Intel's self-driving supergroup has added another member to the team: Fiat Chrysler Automobiles (FCA). The partnership, which already includes BMW, Intel and Mobileye, has ambitious plans to get autonomous vehicles on the road by 2021, and FCA's inclusion is likely to make that goal more achievable.

FCA already has a decent chunk of autonomous expertise under its belt -- it partnered with Waymo (formerly Google) in 2016, adding 100 self-driving Pacifica minivans to the Waymo fleet -- but compared to its rivals the group lags behind. The new partnership gives the company access to the research and resources it needs to keep pace in the field.

But FCA brings something useful to the table, too. Scalability remains an issue for autonomous vehicles -- that is, making the technology work with different makes and models. FCA owns a number of very different brands, including Chrysler, Fiat and Jeep, so the partnership can now work towards autonomy across the board, from Jeeps used off-road to Fiats designed for zooming around cities.

The partnership aims to have 40 self-driving test vehicles on public roads by the end of 2017 -- a small step towards its grander plans to have Level 3 and Level 4/5 self-driving cars in production by 2021. FCA made a smart move getting involved at this stage -- no doubt other manufacturers will soon be looking for a piece of the action as well.

Via: The Verge



Read the full article here by Engadget

Monday, 14 August 2017

Bitcoin Has Passed $4,000

It was only last week that Bitcoin passed $3,000 for the first time, but now it's trading at over $4,000. Some say that the creation of Bitcoin Cash and the adoption of SegWit (which allows for more transactions) were responsible for the recent surge, having boosted confidence in the cryptocurrency's future. Wall Street's obsession with Bitcoin and its constant reporting may also be fueling the craze. So what happens next? No one knows. Bitcoin could crash 50% to $2,000 tomorrow or spike to $5,000 - and I don't think anyone who truly knows crypto would be surprised at either option. Everyone has a different opinion - some say the bubble is oversized and should have popped months ago - others think that bitcoin is currently just a fraction of what it could eventually trade at. Whichever camp you fall in, here's one friendly reminder: don't invest more than you can afford to lose - because if you ask anyone who's spent more than a few months in the cryptocurrency world they'll tell you it's a roller coaster. Discussion

Read the full article here by [H]ardOCP News/Article Feed

Sunday, 13 August 2017

Intel EOLs Atom Chip Used for Microsoft HoloLens

Intel is discontinuing the processor used in Microsoft's HoloLens this fall: customers have been asked to place their final orders for the Atom x5-Z8100P SoC by September 30, with final shipments to be made on October 30. The current-generation HoloLens was released last year with this very same Atom chip, and there is little word on what Microsoft will replace it with, given that the successor has been architected differently and is tipped to run an AI co-processor. While the device will run Windows 10 and will be equipped with an HPU, it will also feature an AI co-processor integrated into the latter that will use neural networks to enable object and voice recognition without the need for an Internet connection. The HPU 2.0 with the programmable AI co-processor will be a self-sufficient device that will run on the next HoloLens' battery power (hence, its power consumption will not be too high). The HPU 2.0 and the AI co-processor were designed entirely in-house and are therefore tailored to the HoloLens' usage model. Discussion

Read the full article here by [H]ardOCP News/Article Feed

Studio Ghibli reopens for Hayao Miyazaki's new film

Just a few years ago, Studio Ghibli's future was up in the air after co-founder and legendary animator Hayao Miyazaki (supposedly) retired. The place is about to be jumping again, however, as the studio announced that it has re-opened to start production on a recently announced new film by the not-so-retired Miyazaki. He was on hand for a small ceremony on July 3rd, where he "brought together his main collaborators already engaged on his new feature film to talk to them about the project," the company said in a news release (translated).

A re-opening normally wouldn't be newsworthy, but at one point, it seemed like Studio Ghibli -- behind masterpieces like Spirited Away, Princess Mononoke and Howl's Moving Castle -- would not produce any more films. Recently, however, Amazon announced that it would stream Ronja, the Robber's Daughter, a children's TV series directed by Miyazaki's son Goro. Shortly after that came the news that the studio would produce a new film directed by Hayao Miyazaki.

Studio Ghibli hasn't released any details about the film itself, but many fans think it will be an adaptation of Miyazaki's first CGI short film Boro the Caterpillar. That short was delayed, but producer Toshio Suzuki has said it will likely be released in 2019 ahead of the Tokyo 2020 Olympic games.

Miyazaki has threatened retirement before, and when he stepped down in 2013, the company announced it would restructure for the next generation of animators. The films, while beloved by animation aficionados and cineastes, have never made tons of money -- Studio Ghibli's highest-grossing film was Spirited Away, which made $275 million back in 2001.

Over the years, however, the films have gained a much larger following thanks to streaming and DVD, so the new one will likely be met with unprecedented anticipation. Adding to that, Miyazaki will be 80 when it's completed, so this could definitely be his last film.

Via: Indie Wire

Source: Buta Connection (Facebook)



Read the full article here by Engadget

Friday, 11 August 2017

Mazda Will Bring the First Compression Ignition Gasoline Engine to Market


Diesel cars, no longer popular in Europe, are definitely pariahs in the US. Americans never warmed to them, and VW's scheme to dupe millions of customers and pollute the planet didn't help. But diesels provide better fuel economy than gasoline engines, even if they do emit more pollution. The ideal internal combustion engine, then, would combine the efficiency of a diesel with the (relatively) lower emissions of a gasoline engine.

Automotive engineers have spent decades trying to build just such an engine. Mazda just announced it's finally done it.

The Japanese automaker says the Skyactiv-X will be the world’s first commercially available compression ignition gasoline engine. I'll explain the tech in a moment, but the big takeaway is Mazda claims the engine is 20 to 30 percent more efficient than its current gas engines, and at least as efficient, if not more so, than its diesel engines.

This Skyactiv-X is part of Mazda's goofily titled "Sustainable Zoom-Zoom 2030" plan that includes a shift toward electric cars beginning in 2019. But Mazda knows EVs won't dominate the market anytime soon, and this engineering breakthrough suggests the auto industry isn't quite done improving internal combustion.

A New Kind of Boom

First, a primer for those of you who aren't petrolheads. Internal combustion engines, whether they burn gasoline or diesel fuel, generate power by compressing air in the cylinder, adding fuel, and detonating the mixture. That creates a small explosion, which forces a piston down, turning the crankshaft, and, through the drivetrain, the wheels. Gasoline engines use a spark plug to create the explosion. Diesel engines compress that air to a much greater degree, making the air inside the cylinder hot enough to detonate the fuel without a spark. That higher compression ratio means higher efficiency from the engine, or more miles from the fuel. Advantage: diesel.
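For a rough sense of the numbers behind that advantage (a textbook idealization, not Mazda's figures): the thermal efficiency of an ideal Otto cycle is η = 1 - 1/r^(γ-1), where r is the compression ratio and γ is roughly 1.4 for air. Raising r from 10 to 15 lifts the ideal efficiency from about 60 percent to about 66 percent. Real engines achieve far less than the ideal figure, but the direction of the gain is the same.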

In gas engines, the fuel is injected earlier, and the air is kept cooler with a lower compression ratio. That means everything is mixed better, resulting in a cleaner burn that produces fewer particulates (soot, basically) and less nitrogen oxide (which is linked to asthma and other respiratory problems). Advantage: gasoline.

Efficiency-minded automakers constantly strive to elevate the compression ratio in gas engines, but are limited by something called auto-ignition (aka knocking), which is when the heat causes the fuel to detonate at the wrong time. But engineers have also been working on embracing auto-ignition, which would allow them to run an engine at even higher compression, and get that increase in efficiency.

Such an engine is great in theory and usually works in a laboratory. General Motors, Honda, and Hyundai have even demonstrated variations on the theme in prototypes over the last decade. And Formula 1 cars use the technology.

But compression ignition is difficult to control outside of a lab, or the cost-no-object arena of F1 racing. Those explosions happen whenever the fuel and air mix reach a given temperature, so engineers sacrifice the millisecond-precise control of spark plugs. The engines are rough when cold, unpredictable when pushed hard, and too often noisy and rattly. Not exactly market-ready.

Mazda hasn't given full details on how it cracked this conundrum, but computers can help. A sophisticated system can control the temperature and pressure in each cylinder by varying turbo boost or valve timing. It can mix in exhaust gas to change the mixture of fuel and air, and calculate just the right amount of gas to throw in.

Plus, Mazda didn't completely abandon current technology. The Skyactiv-X uses something Mazda calls "spark controlled compression ignition," which means the engine has spark plugs and uses them when necessary, like when the engine is cold. It promises a seamless hand-off between sparking and spark-free driving modes. Just how well it works remains to be seen, but you'll be able to judge for yourself, when the first cars with the new engine go on sale in 2019.



Read the full article here by Wired Top Stories

Ethereum Ethminer Performance With Radeon & GeForce OpenCL - August 2017

Here are my latest Ethereum Ethminer benchmarks for those interested in mining this cryptocurrency using OpenCL on AMD Radeon and NVIDIA GeForce GPUs.

These are my latest Ethereum mining benchmarks that just finished up this morning. Radeon tests were done using the latest ROCm binaries on Ubuntu 16.04. The Radeon cards tested were the R9 290, RX 480, RX 560, RX 580, and R9 Fury. Yes, there will be Ethereum benchmarks on the Radeon RX Vega on launch day when that time comes. On the NVIDIA side was their 384.59 driver on Ubuntu 16.04 LTS with the GeForce GTX 960, GTX 970, GTX 980, GTX 980 Ti, GTX 1050, GTX 1060, GTX 1070, GTX 1080, and GTX 1080 Ti.

All the benchmarks were facilitated via the Phoronix Test Suite.

Additionally, the Phoronix Test Suite was monitoring GPU temperatures, AC system power consumption, and generating performance-per-Watt metrics.

Vega could quite possibly beat the GeForce GTX 1080 Ti with Ethminer given AMD's strong OpenCL performance with ROCm. It will be interesting to see later this month. Anyhow, this is the current state of things with the latest OpenCL Linux drivers.

Here were the GPU temperatures during testing... Of course, the GPU temperatures can vary a lot depending upon the AIB's cooling system and/or after-market cooling.

And the overall AC system power consumption during the Ethereum benchmarking process.

Very interesting is the performance-per-Watt. This should be another compelling metric with Vega, but for now the GeForce GTX 1070 is leading when it comes to efficiency.

A Phoronix Test Suite module also provided the current performance-per-dollar for Ethereum GPU mining, but this is a bit of a mess considering the lack of availability currently for most Radeon Polaris graphics cards... Prices for both NVIDIA and AMD were based on selections at Amazon.

Stay tuned for more interesting tests ahead.



Read the full article here by Phoronix

Tuesday, 8 August 2017

Android 8.0 gets "streaming OS updates"

When you get that "out of space" error message during an update, you're only "out of space" on the user storage partition, which is just being used as a temporary download spot before the update is applied to the system partition. Starting with Android 8.0, the A/B system partition setup is being upgraded with a "streaming updates" feature. Update data will arrive from the Internet directly to the offline system partition, written block by block, in a ready-to-boot state. Instead of needing ~1GB of free space, Google will be bypassing user storage almost entirely, needing only ~100KB worth of free space for some metadata. I promise not to make some snide remark about Android's update mess.
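If you're curious which slot your phone is currently booted from, A/B devices expose it as a read-only system property; an illustrative check (assuming adb access) is:

adb shell getprop ro.boot.slot_suffix

It returns _a or _b, and a streaming update is written to whichever slot is not currently active.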

Read the full article here by OSNews

(PR) Intel Unveils Full Intel Core X-series Processor Family Specs; 14- to 18-Core


Today, Intel is releasing the specifications for the 12- to 18-core processors: Intel Core i9-7920X, Intel Core i9-7940X, Intel Core i9-7960X and the Extreme Edition Intel Core i9-7980XE processors. Announced at Computex 2017, the Intel Core X-series processor family is the most powerful, scalable and accessible high-end desktop platform offered by Intel, designed to deliver the performance needed to meet extreme computing demands for virtual reality (VR), content creation, gaming and overclocking.

The new X-series processor family is the ultimate platform for content creators and gamers. Multitasking becomes extreme mega-tasking with simultaneous, compute-intensive, multithreaded workloads aligned in purpose, powered by up to 18 cores and 36 threads. And, with up to 68 PCIe 3.0 lanes on the platform, people have the ability to expand their systems with fast SSDs, up to four discrete GFX cards and ultrafast Thunderbolt 3 solutions.

Content creators can expect up to 20 percent better performance for VR content creation and up to 30 percent faster 4K video editing over the previous generation. This means less time waiting and more time designing new worlds and experiences. Gamers and enthusiasts will experience up to 30 percent faster extreme mega-tasking for gaming over the previous generation.

The 12-core Intel Core X-series processor will be available starting on Aug. 28, and 14- to 18-core Intel Core X-series processors will be available starting Sept. 25. The 4- to 10-core Intel Core X-series processors are already on shelves and available at multiple retailers, along with more than 200 Intel X299 Chipset motherboards.



Read the full article here by techPowerUp!

Monday, 7 August 2017

How to Screw Up Autonomous Cars

Holy crap! Turn any stop sign into a 45mph speed limit sign, as far as some driverless cars are concerned. Actually if you read through the article, this sort of thing will certainly have to be dealt with. I know no [H] readers would do this....well, strike that. This Car and Driver blog is worth a read. UW computer-security researcher Yoshi Kohno described an attack algorithm that uses printed images stuck on road signs. These images confuse the cameras on which most self-driving vehicles rely. In one example, explained in a document uploaded to the open-access scientific-paper site arXiv last week, small stickers attached to a standard stop sign caused a vision system to misidentify it as a Speed Limit 45 sign. Discussion

Read the full article here by [H]ardOCP News/Article Feed

Saturday, 5 August 2017

Quake 2 With Realtime GPU Pathtracing

Edd Biddulph, a computer graphics programmer, has released a video showing his from-scratch GPU-based pathtracer created specifically for Quake 2. Despite running on a Titan Xp there is some noise, which is inevitable on today's hardware. By Biddulph's estimates, it will be another 15-20 years before GPU hardware is powerful enough to do real-time path or ray tracing. I'll be honest, before seeing this and doing a little digging, I didn't know much about raytracing and pathtracing. Having learned a bit more, this is quite incredible, and despite the YouTube compression, you can really see the impressive lighting effects coming from this 20-year-old game. You can find the source code for this GPU pathtracer on GitHub. This is a from-scratch GPU-based pathtracer created specifically for Quake 2. It has several optimisations which are only viable due to the typical characteristics of Quake 2, such as support for parallelogram-shaped light sources, BSP ray traversal, and special handling of sky 'surfaces' (portals). It doesn't officially have a name, but you can call it Raylgun. Discussion

Read the full article here by [H]ardOCP News/Article Feed

Thursday, 3 August 2017

Functional World


Read the full article here by turnoff.us - geek comic site

Epic Games Is Planning To Use Vulkan By Default For Unreal Engine On Linux

Epic Games has an interesting goal of eventually being able to use the Vulkan graphics API by default on Linux systems running the Unreal Engine.

Epic developers are still working on improving the Vulkan renderer, tweaking it for better performance and bringing it to visual/feature parity with Direct3D 11. The upcoming Unreal Engine 4.17 release will bring more Vulkan improvements.

While Windows gamers will continue seeing Direct3D used by default, Epic Games is hoping to see their Vulkan renderer eventually be the default when running this game engine on Linux systems, but for now OpenGL is the default.

This was one of the interesting tidbits of information during yesterday's Khronos SIGGRAPH 2017 event. The Vulkan/OpenGL recording is embedded below.

VIDEO



Read the full article here by Phoronix

The Drummer John Bonham

What Makes John Bonham Such a Good Drummer?



Read the full article here by Likecool

Wednesday, 2 August 2017

How to Write, Package and Distribute a Library in Python


Python is a great programming language, but packaging is one of its weakest points. It is a well-known fact in the community. Installing, importing, using and creating packages has improved a lot over the years, but it's still not on par with newer languages like Go and Rust that learned a lot from the struggles of Python and other mature languages. 

In this tutorial, you'll learn everything you need to know about writing, packaging and distributing your own packages. 

How to Write a Python Library

A Python library is a coherent collection of Python modules that is organized as a Python package. In general, that means that all modules live under the same directory and that this directory is on the Python search path. 

Let's quickly write a little Python 3 package and illustrate all these concepts.

The Pathology Package

Python 3 has an excellent Path object, which is a huge improvement over Python 2's awkward os.path module. But it's missing one crucial capability—finding the path of the current script. This is very important when you want to locate and access files relative to the current script. 

In many cases, the script can be installed in any location, so you can't use absolute paths, and the working directory can be set to any value, so you can't use a relative path. If you want to access a file in a sub-directory or parent directory, you must be able to figure out the current script directory. 

Here is how you do it in Python:
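Something along these lines, using the standard inspect and pathlib modules, does the job (this is a sketch; the article's exact listing may differ slightly):

import inspect
import pathlib

# Resolve the directory of the currently executing script.
script_dir = pathlib.Path(inspect.getabsfile(inspect.currentframe())).parent.resolve()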

To access a file called 'file.txt' in a 'data' sub-directory of the current script's directory, you can use the following code: print(open(str(script_dir/'data/file.txt')).read()). Yep, it's a mouthful.

With the pathology package, you have a built-in script_dir method, and you use it like this:
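A sketch of that usage (the pathology.path module path is assumed from the package's description rather than confirmed):

from pathology.path import Path

script_dir = Path.script_dir()
print(open(str(script_dir/'data/file.txt')).read())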

The pathology package is very simple. It derives its own Path class from pathlib's Path and adds a static script_dir() method that always returns the path of the calling script. 

Here is the implementation:
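A sketch of what such an implementation can look like, based on the description that follows (not necessarily the package's exact source):

import inspect
import os
import pathlib

# pathlib.Path can't be subclassed directly (its constructor dispatches to a
# platform-specific class), so pick the appropriate concrete base class.
_Base = pathlib.WindowsPath if os.name == 'nt' else pathlib.PosixPath


class Path(_Base):
    @staticmethod
    def script_dir():
        # Walk one frame up to the caller and take the directory of its file.
        caller = inspect.stack()[1]
        return pathlib.Path(caller.filename).resolve().parent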

Due to the cross-platform implementation of pathlib.Path, you can't derive directly from it and must instead derive from a specific sub-class (PosixPath or WindowsPath). The script dir resolution uses the inspect module to find the caller and then its filename attribute.

Testing the Pathology Package

Whenever you write something that is more than a throwaway script, you should test it. The pathology module is no exception. Here are the tests using the standard unit test framework: 
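A sketch of what they can look like with unittest (a hypothetical test file; the article's actual tests may differ):

import pathlib
import unittest

from pathology.path import Path


class PathologyTest(unittest.TestCase):
    def test_script_dir(self):
        # The script dir reported to this test module should be the directory
        # containing this very file.
        expected = pathlib.Path(__file__).resolve().parent
        self.assertEqual(expected, Path.script_dir())

    def test_behaves_like_a_path(self):
        # The derived class should still behave like a regular pathlib path.
        self.assertTrue(Path(__file__).resolve().is_absolute())


if __name__ == '__main__':
    unittest.main()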

The Python Path

Python packages must be installed somewhere on the Python search path to be imported by Python modules. The Python search path is a list of directories and is always available in sys.path.
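The exact contents differ from machine to machine and between Python installations, but you can print yours with a quick one-liner like this (purely illustrative):

python -c "import sys; print('\n'.join(sys.path))"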

Note that the first entry may print as an empty line: the empty string represents the current directory, so you can import modules from the current working directory, whatever it is. You can directly add or remove directories to/from sys.path. 

You can also define a PYTHONPATH environment variable, and there are a few other ways to control it. The standard site-packages directory is included by default, and this is where packages you install via pip go. 

How to Package a Python Library

Now that we have our code and tests, let's package it all into a proper library. Python provides an easy way via the setup module. You create a file called setup.py in your package's root directory. Then, to create a source distribution, you run: python setup.py sdist

To create a binary distribution called a wheel, you run: python setup.py bdist_wheel

Here is the setup.py file of the pathology package:
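A sketch along those lines (the metadata values below are placeholders, not the package's actual details):

from setuptools import setup, find_packages

setup(
    name='pathology',
    version='0.1.0',                        # placeholder version
    description='Path objects that know the calling script directory',
    author='<author name>',                 # placeholder
    author_email='<author@example.com>',    # placeholder
    url='<project URL>',                    # placeholder
    license='MIT',                          # illustrative
    packages=find_packages(exclude=('tests',)),
)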

It includes a lot of metadata in addition to the 'packages' item that uses the find_packages() function imported from setuptools to find sub-packages.

Let's build a source distribution by running python setup.py sdist. You may see a warning caused by the non-standard README.md file; it's safe to ignore. The result is a tar-gzipped file under the dist directory.

A binary distribution (a wheel) is built the same way, with python setup.py bdist_wheel.

The pathology package contains only pure Python modules, so a universal wheel can be built. If your package includes C extensions, you'll have to build a separate wheel for each platform.

For a deeper dive into the topic of packaging Python libraries, check out How to Write Your Own Python Packages.

How to Distribute a Python Package

Python has a central package repository called PyPI (Python Packages Index). When you install a Python package using pip, it will download the package from PyPI (unless you specify a different repository). To distribute our pathology package, we need to upload it to PyPI and provide some extra metadata PyPI requires. The steps are:

  • Create an account on PyPI (just once).
  • Register your package.
  • Upload your package.

Create an Account

You can create an account on the PyPI website. Then create a .pypirc file in your home directory.

For testing purposes, you can add a "pypitest" index server to your .pypirc file:
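A minimal sketch of such a file (the username and password are placeholders; the repository URL reflects the test PyPI address in use at the time and may have changed since):

[distutils]
index-servers =
    pypi
    pypitest

[pypi]
username = <your username>
password = <your password>

[pypitest]
repository = https://testpypi.python.org/pypi
username = <your username>
password = <your password>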

Register Your Package

If this is the first release of your package, you need to register it with PyPI. Use the register command of setup.py. It will ask you for your password. Note that I point it to the test repository here:
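The invocation looks roughly like this, with pypitest being the index-server name defined in .pypirc above:

python setup.py register -r pypitest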

Upload Your Package

Now that the package is registered, we can upload it. I recommend using twine, which is more secure. Install it as usual using pip install twine. Then upload your package using twine and provide your password (redacted below):
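Roughly like so, again pointing at the test index (drop -r pypitest to upload to the real PyPI):

twine upload -r pypitest dist/*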

For a deeper dive into the topic of distributing your packages, check out How to Share Your Python Packages.

Conclusion

In this tutorial, we went through the fully fledged process of writing a Python library, packaging it, and distributing it through PyPI. At this point, you should have all the tools to write and share your libraries with the world.

Additionally, don’t hesitate to see what we have available for sale and for study in the marketplace, and please ask any questions and provide your valuable feedback using the feed below.



Read the full article here by Nettuts+

Tuesday, 1 August 2017

Microsoft Won't Patch 20-Yr-Old SMBv1 Vulnerability (You Should Just Turn the Service Off)

An anonymous reader shares a news post: Following the recent WannaCry and Petya ransomware attacks, Microsoft recommended that all Windows 10 users remove the unused but vulnerable SMBv1 file sharing protocol from their PCs. This is because both variants of the ransomware actually used the same SMBv1 exploit to replicate through network systems, even though it seems that Petya mostly affected Windows PCs in Ukraine. Anyway, if you haven't turned off the protocol on the PC already, you really should: Not only because new WannaCry/Petya variants could once again use the same vulnerability to encrypt your files, but because another 20-year-old flaw has just been unveiled during the recent DEF CON hacker conference. The SMB security flaw called "SMBLoris" was discovered by security researchers at RiskSense, who explained that it can lead to DoS attacks affecting every version of the SMB protocol and all versions of Windows since Windows 2000. More importantly, a Raspberry Pi and just 20 lines of Python code are enough to bring a Windows server to its knees.

Read more of this story at Slashdot.



Read the full article here by Slashdot

Monday, 31 July 2017

Russian censorship law bans proxies and VPNs

It's going to be much harder to view the full web in Russia before the year is out. President Putin has signed a law that, as of November 1st, bans technology which lets you access banned websites, including virtual private networks and proxies. Internet providers will have to block websites hosting these tools. The measure is ostensibly meant to curb extremist content, but that's just pretext -- this is really about preventing Russians from seeing content that might be critical of Putin, not to mention communicating in secret.

Accordingly, the President has signed another law requiring that chat apps identify users through their phone numbers after January 1st, 2018. Some messaging clients already encourage you to attach an account to a phone number, but this makes it mandatory -- Facebook and others can't reject the idea if they'd prefer to give you some kind of anonymity. The measure also demands that operators limit users' access if they're spreading illegal material.

The timing likely isn't coincidental. Russia is holding a presidential election in March, and banning technology like VPNs will make it harder for voters to see news that questions Putin's authority. Likewise, you may be less likely to organize a protest if you know that the police can trace anonymous chats back to you through your phone number. As with China's VPN crackdown, Russian officials are trying to control the online conversation at a crucial moment to make sure the powers that be go unchallenged.

Via: RadioFreeEurope, Reuters

Source: Gov.ru (translated 1), (2)



Read the full article here by Engadget

Sunday, 30 July 2017

F1 | Vettel's pride: “Let's remember where we were 12 months ago”

Sebastian Vettel took pole position for the Hungarian Grand Prix, ahead of his teammate Kimi Raikkonen. On a...

Read the full article here by FormulaPassion.it

F1 | Di Resta: “Fantastic to drive the best cars in the world”

Di Resta, currently a DTM driver and a commentator for British TV at this Hungarian GP, has taken the...

Read the full article here by FormulaPassion.it

Friday, 21 July 2017

Four short links: 21 July 2017

Offline First, Security Tools, Learning Game Strategy, and Design Documentation

  1. Offline First -- how to build an offline-first site in Javascript.
  2. Blackhat Arsenal -- software being released or updated during the Blackhat Arsenal event (e.g., DefPloreX, a machine-learning toolkit for large-scale e-crime forensics; and CBM, the "Car Backdoor Maker").
  3. Learning Macromanagement in StarCraft from Replays using Deep Learning -- Neural networks are trained on 789,571 state-action pairs extracted from 2,005 replays of highly skilled players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting the next build action. By integrating the trained network into UAlbertaBot, an open source StarCraft bot, the system can significantly outperform the game’s built-in Terran bot and play competitively against UAlbertaBot with a fixed rush strategy. (via Mark Riedl)
  4. Making Engineering Team Communication Clearer, Faster, Better -- it’s very important to make sure you have a process that actually gets people to read the document. The write-only document fired off into the void is a common problem, and this talks about how to solve it (for design documents, but the principles translate).

Continue reading Four short links: 21 July 2017.



Read the full article here by Four Short Links - O'Reilly Media

Wednesday, 19 July 2017

Avast Now Owns CCleaner After Acquiring Piriform

An anonymous reader writes: Security firm Avast has acquired software firm Piriform. Not only does the acquired company make CCleaner, but many other solid programs too. In fact, the rest of Piriform's library -- Recuva, Speccy, and Defraggler -- are staples of the Windows freeware community. "CCleaner is a leading brand in the market, used by 130 million people, including 15 million Android users. CCleaner has an extensive and extremely loyal community of tech-savvy users, who need to speed up and optimize their PC and Android experience. Avast will maintain the CCleaner brand of products along with Avast's existing performance optimization products, Avast Cleanup and AVG Tune Up. With the addition of CCleaner, Avast has dramatically expanded its product offerings in the PC and smartphone optimization market reaching customers around the world who demand faster performance," says Avast. Vince Steckler, CEO of Avast explains, "We see many commonalities between CCleaner and Avast, allowing for great new products for our user bases. Avast and CCleaner are the top two downloaded products on popular download sites. They are both known by advanced users as focused on performance, so we believe there will be a great interest from our CCleaner customers in using Avast security products and vice versa. In today's connected world, it's all about speed and high performance, and with Piriform's robust technology we can address this need perfectly. We look forward to working with the Piriform team to grow the business together."

Read more of this story at Slashdot.



Read the full article here by Slashdot

Shut the HAL Up

Posted by Jeff Vander Stoep, Senior Software Engineer, Android Security

Updates are essential for security, but they can be difficult and expensive for device manufacturers. Project Treble is making updates easier by separating the underlying vendor implementation from the core Android framework. This modularization allows platform and vendor-provided components to be updated independently of each other. While easier and faster updates are awesome, Treble's increased modularity is also designed to improve security.

Isolating HALs

A Hardware Abstraction Layer (HAL) provides an interface between device-agnostic code and device-specific hardware implementations. HALs are commonly packaged as shared libraries loaded directly into the process that requires hardware interaction. Security boundaries are enforced at the process level. Therefore, loading the HAL into a process means that the HAL is running in the same security context as the process it's loaded into.

The traditional method of running HALs in-process means that the process needs all the permissions required by each in-process HAL, including direct access to kernel drivers. Likewise, all HALs in a process have access to the same set of permissions as the rest of the process, including permissions required by other in-process HALs. This results in over-privileged processes and HALs that have access to permissions and hardware that they shouldn't.

Figure 1. Traditional method of multiple HALs in one process.

Moving HALs into their own processes better adheres to the principle of least privilege. This provides two distinct advantages:

  1. Each HAL runs in its own sandbox and is permitted access only to the hardware driver it controls, and the permissions granted to the process are limited to those required to do its job.
  2. Similarly, the process loses access to hardware drivers and other permissions and capabilities needed by the HALs.
Figure 2. Each HAL runs in its own process.

Moving HALs into their own processes is great for security, but it comes at the cost of increased IPC overhead between the client process and the HAL. Improvements to the binder driver made IPC between HALs and clients practical. Introducing scatter-gather into binder improves the performance of each transaction by removing the need for the serialization/deserialization steps and reducing the number of copy operations performed on data from three down to one. Android O also introduces binder domains to provide separate communication streams for vendor and platform components. Apps and the Android frameworks continue to use /dev/binder, but vendor-provided components now use /dev/vndbinder. Communication between the platform and vendor components must use /dev/hwbinder. Other means of IPC between platform and vendor are disallowed.
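On a device running Android O, these separate binder domains show up as ordinary device nodes; assuming adb access, a quick illustrative check is:

adb shell ls -l /dev/binder /dev/hwbinder /dev/vndbinder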

Case study: System Server

Many of the services offered to apps by the core Android OS are provided by the system server. As Android has grown, so has system server's responsibilities and permissions, making it an attractive target for an attacker. As part of project Treble, approximately 20 HALs were moved out of system server, including the HALs for sensors, GPS, fingerprint, Wi-Fi, and more. Previously, a compromise in any of those HALs would gain privileged system permissions, but in Android O, permissions are restricted to the subset needed by the specific HAL.

Case study: media frameworks

Efforts to harden the media stack in Android Nougat continued in Android O. In Nougat, mediaserver was split into multiple components to better adhere to the principle of least privilege, with audio hardware access restricted to audioserver, camera hardware access restricted to cameraserver, and so on. In Android O, most direct hardware access has been entirely removed from the media frameworks. For example HALs for audio, camera, and DRM have been moved out of audioserver, cameraserver, and drmserver respectively.

Reducing and isolating the attack surface of the kernel

The Linux kernel is the primary enforcer of the security model on Android. Attempts to escape sandboxing mechanisms often involve attacking the kernel. An analysis of kernel vulnerabilities on Android showed that they overwhelmingly occurred in and were reached through hardware drivers.

De-privileging system server and the media frameworks is important because they interact directly with installed apps. Removing direct access to hardware drivers makes bugs difficult to reach and adds another layer of defense to Android's security model.



Read the full article here by Android Developers Blog

Tuesday, 18 July 2017

Udoo X86 Power On / Off via BLE on Arduino 101

I finally received my UDOO X86, which I backed on Kickstarter in April 2016!

To give a little context, it is a board designed by UDOO sporting a 64-bit Intel x86 CPU (the exact model depends on whether you have a BASIC, ADVANCED or ULTRA), 2/4/8 GB of RAM (soldered) and a LOT of peripherals and I/O.
To mention some of them:
  • M.2 Key E port
  • Gigabit Ethernet
  • 3x USB 3 Ports
  • 1 HDMI and 2x DisplayPort
  • GPIO
  • 2x COM ports
  • RTC
  • I2C

In addition, they managed to put on the same board a Genuino/Arduino 101 board (Intel Curie), with BLE and a 6-axis accelerometer & gyroscope.
All of this on a board more or less twice the size of the Raspberry Pi.

More details can be checked on the official documentation page on the UDOO site.

I am writing this tutorial from the board itself, and I can tell it can be used as a desktop/HTPC without any major issues.

One cool feature of the Arduino 101 is that it is able to power the main CPU on and off via an interrupt!

I am going to show you how to remotely power the main CPU on and off via BLE.

I am using LUbuntu 16.04 as OS, but those instructions should work pretty well on any operating system, even MS Windows.

Grant Power management to Intel Curie


Verify that the BIOS/EFI settings are as shown on this page.

Prepare the Arduino environment


Install the Arduino IDE from the official site and install the Intel Curie Arduino 101 Libraries & Tools. You could do the same with PlatformIO, but I am not quite sure about the Intel Curie support.

Check that you have selected the correct board type, programmer and serial port in the Tools menu.


Remember to give your user the rights to write to the serial port by executing:

sudo usermod -a -G dialout $USER

You have to log out and back in for those changes to take effect.

In addition to that, it could be necessary to execute the following script:

wget -O - https://raw.githubusercontent.com/01org/corelibs-arduino101/master/scripts/create_dfu_udev_rule | sudo bash

This is required to make the Arduino capable of resetting the board to put it in programming mode.

Check that everything works by uploading a simple example sketch (pick the Basics/Blink one).

The blinking LED is the one close to the HDMI port.

Upload the sketch to receive commands from BLE

Let's start with a skeleton sketch Intel prepared for us.

Choose the example from CurieBLE/Peripheral/CallbackLED.

Upload it to the board.

Follow the instructions shown on this page to configure the Android application.

All you need to do is pair with the Arduino101 BLE device, which will be shown as LEDCB.
Once you're connected, you'll be able to write 0 or 1 to a characteristic exposed by the service implemented in the sketch, and the LED will turn on or off depending on the value you write.

Adapt the sketch to send a Power signal to the main board


We need to add a simple function that sends a LOW digital signal for 20 ms on pin 9, followed by a HIGH level.
This generated wave will be caught by an intermediary STM32, which will send the power interrupt to the Intel Atom/Celeron/Pentium.
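A sketch of such a function (pin number and timing taken from the description above; treat it as an illustration rather than the project's exact code):

// Send the power toggle signal: pull pin 9 LOW for 20 ms, then restore HIGH.
const int POWER_PIN = 9;

void sendPowerSignal() {
  pinMode(POWER_PIN, OUTPUT);
  digitalWrite(POWER_PIN, LOW);   // falling edge caught by the intermediary STM32
  delay(20);                      // hold the LOW level for 20 ms
  digitalWrite(POWER_PIN, HIGH);  // back to the idle HIGH level
}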



The whole board will react to the interrupt, including the Arduino 101, which will be restarted, losing any unsaved state.
This means that anything which follows the call to sendPowerSignal() never actually runs, as the Arduino 101 will reboot.

Issues (2 May 2017)


I am unable to fully verify that it is working, since on shutdown the board actually gets powered back on.
Possible root causes:

  • BLE is interacting with pin 9
  • When the Arduino 101 is reset, pin 9 goes low again, triggering a power-on

This does not seem to affect just my board: I found a forum post, which you can check here, where another user faced the same problem.

I can also confirm that if I suspend the system, the Arduino 101 is no longer visible over Bluetooth, which means it gets powered off.

UPDATE (3 May 2017)!

I've been in touch with UDOO team and they provided a revision of the BIOS for testing purposes only.

The problem is partially fixed, and I'll keep you posted on when this example will be fully working.

UPDATE (18 July 2017)!

I've received a BIOS 1.03 RC version to test if the mentioned issues are fixed.

And we have good news! Everything works as expected!

I'll let UDOO publish the changelog of the latest BIOS as soon as it is officially out, but I can confirm you'll be able to use the Arduino 101 to properly power up and shut down the main board.

The power management via Intel Curie can now be left enabled, and flashing the Arduino 101 or rebooting will no longer generate issues (spurious reboots).

The problem has been solved by making the power signal trigger sequence more complex (5 HIGH-to-LOW transitions within 100 ms, instead of a single transition).

If you're using a desktop Linux distribution (e.g. Ubuntu), please check the power management rules and set "When power button is pressed" to "Shutdown" or "Suspend" (depending on what you prefer).

Otherwise, you will get a dialog, as in the following video... 





Monday, 17 July 2017

IBM's new mainframe keeps everything encrypted, all the time

Data breaches are bad enough by themselves, but they're made worse when companies don't bother to (or can't) encrypt all their info. It's tantamount to giving hackers the keys to the kingdom. But what to do? IBM thinks it has a simple solution: encrypt absolutely everything. Its latest Z mainframe system now has enough power to automatically encrypt all the data associated with an app or service, whether it's in transit or sitting idle in a database. According to IBM, conventional systems based on x86 processors only encrypt "limited slices" of information, while the new Z has enough power (18 times more, in fact) to lock everything down as a matter of course.

The new platform can also "self-destruct" encryption keys the moment there's any tampering, and quickly encrypts the programming interfaces that tie apps and services together. And the Z is faster overall, regardless of the task at hand: there's three times the memory (a hefty 32TB), three times faster input/output and much lower lag when talking to a storage area network. The hardware should juggle many more transactions (like purchases at an online store) even as it boosts their overall security.

Naturally, there are catches to this utopian vision of internet security. While IBM plays an important role in online commerce (its transaction engine handles 87 percent of credit card payments), many companies would likely have to upgrade before you'd see the benefit. Any mass migration would probably take years, if it happens at all. And of course, IBM's solution doesn't stop hackers from taking data -- it just prevents them from reading that data. There's a chance, however slim, that the most determined perpetrators will find a way to decrypt content once they have it.

Nonetheless, this technology could be a big step forward if it lives up to IBM's hype. If you can assume that your sensitive data is always encrypted, you won't have to worry quite so much that thieves might go on a shopping spree with your credit card. This might also deter crooks if they know that they're unlikely to hit the jackpot when they crack a company's security.

Source: IBM



Read the full article here by Engadget

* Android is a dead end *

Dieter Bohn at The Verge: So while Microsoft didn't do itself any favors, I'd argue strongly that all these machinations and flailings weren't a response (or weren't only a response) to the iPhone. The real enemy was the company that had set its sights on Microsoft's phone ambitions since before the iPhone was released. That company was Google, of course, and it only tangentially wanted to take on the iPhone. Google's real target was always Microsoft, and it hit the bullseye.

This article looks at the past, so let me take this opportunity to posit something that might come as a surprise to some. Android is a dead end.

I really want to write a far more detailed and in-depth article explaining why I think Android is a dead end, but I can't yet fully articulate my thoughts or pinpoint why, exactly, I've felt like this for months now. All this doesn't mean Google is going to get out of mobile operating systems, and it doesn't even mean that the name "Android" is going away. All it means is that what we think of today as "Android" - a Linux kernel with libraries, the Android Runtime, and so on on top - has served its hackjob, we-need-to-compete purpose and is going to go away.

Android in its current form suffers from several key architectural problems - it's not nearly as resource-efficient as, say, iOS, has consistent update problems, and despite hefty hardware, still suffers from the occasional performance problems, among other things - that Google clearly hasn't been able to solve. It feels like Android is in limbo, waiting for something, as if Google is working on something else that will eventually succeed Android.

Is that something Fuchsia? Is Project Treble part of the plan, to make it easier for Google to eventually replace Android's Linux base with something else? If Android as it exists today was salvageable, why are some of the world's greatest operating systems engineers employed by Google not working on Android, but on Fuchsia? If Fuchsia is just a research operating system, why did its developers recently add actual wallpapers to the repository? Why does every design choice for Fuchsia seem specifically designed for and targeted at solving Android's core problems?

I don't like making broad predictions based on gut feelings and spidey senses, since they can be incredibly misleading and hard to read, but I'm still pretty confident on this one: over the coming two to three years, Android will undergo a radical transformation. This transformation will be mostly transparent to users - their next Android phone won't actually be "Android" anymore, but still run the same applications, and they literally won't care - but it won't be a Linux device, and it won't suffer from Android's core problems. In a few years, Google's Pixel phone will have a fully custom, Google-designed SoC, and run an operating system that is Android in brand name only. Bookmark this. Read more on this exclusive OSNews article...

Read the full article here by OSNews

Friday, 14 July 2017

Gran Turismo Sport has a new, final October 2017 release date

Gran Turismo Sport, the seventh mainline entry in the franchise, is now scheduled to be released Oct. 17 on PlayStation 4, developer Polyphony Digital announced today.

That’s the timing in North America; the simulation racing game will debut Oct. 18 across Europe. The launch happens to be two weeks after the arrival of Gran Turismo Sport’s direct competition — Microsoft’s Forza Motorsport 7 is set to launch Oct. 3 on Windows 10 and Xbox One. Polyphony Digital’s announcement was accompanied by a new 30-second trailer, which you can watch above.

It’s been a long, winding road for Gran Turismo Sport to get to this point. The new mid-October release date is more than 11 months after the game was originally scheduled to arrive — publisher Sony Interactive Entertainment said in May 2016 that it was going to launch the game on Nov. 15, 2016. Polyphony Digital then announced in August that it had decided to push the game back to 2017.

Gran Turismo Sport cover artwork Polyphony Digital/Sony Interactive Entertainment

The developer had also planned to hold an open beta for Gran Turismo Sport in early 2016. However, the company canceled it in May of that year. Instead, the studio ran a closed beta, but not until this past March.

“We are now entering the final stages of polishing as we prepare for launch,” said series creator and Polyphony Digital president Kazunori Yamauchi in a PlayStation Blog post. He added that the studio is finally getting ready to provide details on Gran Turismo Sport’s features, including its campaign mode, vehicles and courses. As previously announced, the game will also support the PlayStation VR headset with a virtual reality mode.

For now, though, Sony is just saying that pre-orders are live at the PlayStation Store and European retailers. Gran Turismo Sport will be available in the usual $59.99 Standard Edition and a previously announced $69.99 Limited Edition, which comes with the following items:

  • GT Sport Starter Pack (8 Cars)
  • $1MM In-Game Credit
  • Livery Sticker Pack
  • 30 GT Sport Themed Avatars
  • Chrome Racing Helmet (For Driver Profile)
  • Limited Steelbook & Packaging

Sony is also offering a Digital Deluxe Edition for $79.99. It beefs up the Starter Pack with four more cars, and doubles the in-game credit and the number of avatars.



Read the full article here by Polygon

F1 announces social media partnership with Snapchat before British GP

Formula 1 and Snap Inc, the social media company which owns social media platform Snapchat, have announced a new partnership to cover grands prix.

Lewis Hamilton was criticised for using Snapchat – the mobile and tablet application which allows users to share photos and videos to their own ‘Story’ and view their contacts’ ‘Stories’ – during the 2016 Japanese GP Thursday press conference, for which he later apologised.

Companies, media outlets and major events also curate stories, which group those photos and videos over 24 hours, after which they are deleted.

F1 is focusing on the ‘Discover’ part of the app, which compiles photos and videos – ‘Snaps’ – submitted by users and collates them under a collective story during an event. A team of editors and producers at Snapchat will review the photos before they are viewable to F1’s audience on the app.

The deal is F1’s first commercial deal with a digital and mobile-orientated platform and it will begin at the British Grand Prix on Sunday, continuing in Singapore, Japan, the USA, Mexico, Brazil and Abu Dhabi. The collaboration with Snapchat is part of F1’s ongoing push to engage fans with social media and digital platforms, including its mobile (Official F1) app and its website with live timing.

Notably, Liberty has relaxed its rules for teams and drivers on sharing video content, with clips from pre-season testing shared on Twitter and Facebook marking a change from the motorsport’s previous era of management.

Frank Arthofer, Head of Digital and New Business at F1, said: “This is the first step towards expanding our social media strategy.

“Right from the start, we have said we want to work with partners to bring fans closer to the amazing show that is Formula 1, an incredible mix of technology and individual talent – and Snap fits that bill.

“We need to continue to bring new fans to the sport – by reaching out to them on social media platforms with behind the scenes, fun and engaging content.

“Snap’s platform is one of the most popular among ‘millennials,’ a sector we are particularly keen on attracting, as it represents the future of our sport.”

The app has more than 166 million daily active users around the world and more than 3 billion Snaps are sent every day. The platform also supports advertisers and partners with other sports leagues, outlets and broadcasters such as Sky Sports, NASCAR, EA Sports, Fox Sports and MotoGP.

“We want to work with the most iconic sporting organizations in the world, that are beloved by our global community of passionate and highly engaged fans – and working with Formula 1 has been at the top of our wishlist for a long time,” said Ben Schwerin, Snap Inc’s Vice President of Partnerships.

“Our goal is simple – we want to make being a fan more fun no matter where you are and Snapchat offers unique and creative ways to experience it with their closest friends.

“We are honoured to be the first platform they are partnering with, and are excited to start offer their teams, partners and advertisers an opportunity to reach millions of Snapchatters around the globe.”

Will you be using Snapchat to engage with the upcoming grands prix? Have your say in the comment section below.



Read the full article here by James Allen on F1 - The official website