Thursday, 31 August 2017

LogDevice: a distributed data store for logs

A log is the simplest way to record an ordered sequence of immutable records and store them reliably. Build a data-intensive distributed service and chances are you will need a log or two somewhere. At Facebook, we build a lot of big distributed services that store and process data. Want to connect two stages of a data processing pipeline without having to worry about flow control or data loss? Have one stage write into a log and the other read from it. Maintaining an index on a large distributed database? Have the indexing service read the update log to apply all the changes in the right order. Got a sequence of work items to be executed in a specific order a week later? Write them into a log and have the consumer lag a week. Dream of distributed transactions? A log with enough capacity to order all your writes makes them possible. Durability concerns? Use a write-ahead log.

At Facebook's scale, all of that is much easier said than done. The log abstraction carries with it two important promises that are difficult to fulfill at scale: highly available and durable record storage, and a repeatable total order on those records. LogDevice attempts to deliver on those two promises, so dear to the heart of a distributed system designer, at an essentially unlimited scale. It is a distributed data store designed specifically for logs.

The log can be viewed as a record-oriented, append-only, and trimmable file. Let's look at what this means in more detail:

  • Record-oriented means that data is written into the log in indivisible records, rather than individual bytes. More importantly, a record is the smallest unit of addressing: A reader always starts reading from a particular record (or from the next record to be appended to the log) and receives data one or more records at a time. Still more importantly, record numbering is not guaranteed to be continuous. There may be gaps in the numbering sequence, and the writer does not know in advance what log sequence number (LSN) its record will be assigned upon a successful write. Because LogDevice is not bound by a continuous byte numbering requirement, it can offer better write availability in the presence of failures.
  • Logs are naturally append-only. No support for modifying existing records is necessary or provided.
  • Logs are expected to live for a relatively long time — days, months, or even years — before they are deleted. The primary space reclamation mechanism for logs is trimming, or dropping the oldest records according to either a time- or space-based retention policy. A minimal API sketch of this data model follows the list.
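
To make this data model concrete, here is a minimal sketch, in C++, of what a record-oriented, append-only, trimmable log interface could look like. The names (LogClient, append, startReading, trim) and the single-integer LSN are illustrative assumptions for this post, not the actual LogDevice client API.

#include <cstdint>
#include <functional>
#include <string>

// Hypothetical log sequence number. Treated as a single integer here for
// simplicity; LogDevice actually uses (epoch, offset) pairs, described later.
using LSN = uint64_t;

struct Record {
    LSN lsn;             // position in the log; the numbering may contain gaps
    std::string payload; // an indivisible, immutable record
};

// Illustrative client interface for a single log (not the real LogDevice API).
class LogClient {
public:
    virtual ~LogClient() = default;

    // Appends a record; its LSN is known only once the write is acknowledged.
    virtual LSN append(const std::string& payload) = 0;

    // Delivers records with lsn >= from, in LSN order, to the callback.
    virtual void startReading(LSN from,
                              std::function<void(const Record&)> onRecord) = 0;

    // Reclaims space by dropping all records with lsn <= upTo.
    virtual void trim(LSN upTo) = 0;
};

The three operations capture the abstraction: appends with no in-place updates, ordered reads starting from an arbitrary record, and trimming of the oldest records.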

The relaxed data model of LogDevice allowed us to reach more optimal points in the trade-off space of availability, durability, and performance than what would be possible for a distributed file system strictly adhering to the POSIX semantics, or for a log store built on top of such a file system.

Workload and performance requirements

Facebook has a variety of logging workloads with highly variable performance, availability, and latency requirements. We designed LogDevice to be tunable for all those conflicting objectives, rather than a one-size-fits-all solution.

What we found common to most of our logging applications is the requirement of high write availability. The loggers just don't have anywhere to park their data, even for a few minutes. LogDevice must be there for them, available. The durability requirement is also universal. As in any file system, no one wants to hear that their data was lost after they received an acknowledgement of a successful append to a log. Hardware failures are no excuse. Finally, we discovered that while most of the time log records are read just a few times, and very soon after they are appended to a log, our clients occasionally perform massive backfills. A backfill is a challenging access pattern where a client of LogDevice starts at least one reader per log for records that are hours or even days old. Those readers then proceed to read everything in every log from that point on. The backfills are usually precipitated by failures in the downstream systems that consume log records containing state updates or events. A backfill allows the downstream system to rebuild the state that was lost.

It's also important to be able to cope with spikes in the write load on individual logs. A LogDevice cluster typically hosts thousands to hundreds of thousands of logs. We found that on some of our clusters the write rate on a few logs may see a 10x or higher spike over the steady state, while the write rate on the majority of the logs handled by that LogDevice cluster does not change. LogDevice separates record sequencing from record storage, and uses non-deterministic placement of records to improve write availability and better tolerate temporary load imbalances caused by such spikes.

Consistency guarantees

The consistency guarantees provided by a LogDevice log are what one would expect from a file, albeit a record-oriented one. Multiple writers can append records to the same log concurrently. All those records will be delivered to all readers of the log in the same order, namely the order of their LSNs, with repeatable read consistency. If a record was delivered to one reader, it will also be delivered to all readers encountering that LSN, barring unlikely catastrophic failures that result in the loss of all copies of the record. LogDevice provides built-in data loss detection and reporting. Should data loss occur, the LSNs of all records that were lost will be reported to every reader that attempts to read the affected log and range of LSNs.

No ordering guarantees are provided for records of different logs. The LSNs of records from different logs are not comparable.

Design and implementation

Non-deterministic record placement

It's good to have options. Having a large number of placement options for record copies improves write availability in a distributed storage cluster. Similar to many other distributed storage systems, LogDevice achieves durability by storing several identical copies of each record (typically two or three) on different machines. With many placement options for those copies you can complete writes even if a lot of storage nodes in your cluster are down or slow, as long as the part of the cluster that is up can still handle the load. You can also accommodate spikes in the write rate on a single log by spreading the writes over all the nodes available. Conversely, if a particular log or record is restricted to just a few specific nodes, the maximum throughput of a single log will be limited by the capacity of those nodes, and the failure of just a few nodes may cause all writes on some logs to fail.

The principle of maximizing placement options for incoming data is employed by many successful distributed file systems. In Apache HDFS, for instance, a data block can be placed on any storage node in the cluster, subject to the cross-rack and space constraints enforced by the centralized metadata repository called the name node. In Red Hat Ceph, data placement is controlled by a multi-valued hash function. The values produced by the hash function provide multiple placement options for an incoming data item. This eliminates the need for a name node but cannot quite reach the same level of placement flexibility.

LogDevice, with its focus on log storage, takes a different approach to record placement. It provides a level of placement flexibility equivalent to that offered by a name node without actually requiring a name node. Here is how this is accomplished. First, we decouple the ordering of records in a log from the actual storage of record copies. For each log in a LogDevice cluster, LogDevice runs a sequencer object whose sole job is to issue monotonically increasing sequence numbers as records are appended to that log. The sequencer may run wherever it is convenient: on a storage node, or on a node reserved for sequencing and append execution that does no storage.

Figure 1. The separation of sequencing and storage in LogDevice.
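
As a rough illustration of the sequencing half of Figure 1, the sketch below shows a per-log sequencer whose only job is to stamp each append with the next sequence number; storing the record copies is handled separately by the storage nodes. This is a simplification for illustration, not LogDevice's actual sequencer implementation.

#include <atomic>
#include <cstdint>

// Simplified per-log sequencer: issues monotonically increasing sequence
// numbers for appends to one log and stores no record data itself.
class Sequencer {
public:
    explicit Sequencer(uint64_t first) : next_(first) {}

    // Called once per append; thread-safe and strictly increasing for this log.
    uint64_t issueNext() {
        return next_.fetch_add(1, std::memory_order_relaxed);
    }

private:
    std::atomic<uint64_t> next_;
};

Because the sequencer holds no record data, it can run on a storage node or on a dedicated sequencing node, and a failed sequencer can be replaced without copying any records.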

Once a record is stamped with a sequence number, the copies of that record may potentially be stored on any storage node in the cluster. Their placement will not affect the repeatable read property of the log as long as readers can efficiently find and retrieve the copies.

A client that wishes to read a particular log contacts all storage nodes that are permitted to store records of that log. That set, called the node set of the log, is usually kept smaller than the total number of storage nodes in the cluster. The node set is a part of the log's replication policy. It can be changed at any time, with an appropriate note in the log's metadata history, which the readers consult in order to find the storage nodes to connect to. Node sets allow LogDevice clusters to scale independently from the number of readers. The nodes contacted by the client deliver record copies to it by pushing them into TCP connections as fast as they can. Naturally, the header of each record includes its sequence number. The LogDevice client library performs the reordering and occasional de-duplication of records necessary to ensure that the records are delivered to the reader application in the order of their LSNs.
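
The reordering and de-duplication performed by the client library can be pictured as a merge of per-node record streams, as in the sketch below: each storage node delivers its copies in LSN order, and the client emits exactly one copy of each record in global LSN order. This is a structural illustration only; the real client library also has to deal with flow control, gap detection, and nodes joining or leaving.

#include <cstddef>
#include <cstdint>
#include <functional>
#include <queue>
#include <string>
#include <tuple>
#include <vector>

using LSN = uint64_t;

struct Record {
    LSN lsn;
    std::string payload;
};

// Illustrative client-side merge: takes one record stream per storage node
// (each already sorted by LSN) and delivers one copy of every record to the
// application, in LSN order.
void mergeAndDeliver(const std::vector<std::vector<Record>>& perNodeStreams,
                     const std::function<void(const Record&)>& deliver) {
    // Min-heap of (lsn, node index, position within that node's stream).
    using Entry = std::tuple<LSN, std::size_t, std::size_t>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;

    for (std::size_t n = 0; n < perNodeStreams.size(); ++n) {
        if (!perNodeStreams[n].empty()) {
            heap.emplace(perNodeStreams[n][0].lsn, n, 0);
        }
    }

    LSN lastDelivered = 0;
    bool deliveredAny = false;
    while (!heap.empty()) {
        const auto [lsn, n, i] = heap.top();
        heap.pop();
        // De-duplicate: several nodes may hold a copy of the same record.
        if (!deliveredAny || lsn > lastDelivered) {
            deliver(perNodeStreams[n][i]);
            lastDelivered = lsn;
            deliveredAny = true;
        }
        if (i + 1 < perNodeStreams[n].size()) {
            heap.emplace(perNodeStreams[n][i + 1].lsn, n, i + 1);
        }
    }
}

In steady state, the copy set filtering described in the next paragraph means each record normally arrives only once, so the de-duplication branch is rarely taken.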

While this placement and delivery scheme is great for write availability and handling spiky write workloads, it would not be very efficient for file workloads that often include many point reads. For log reading workloads, which are largely sequential, it is quite efficient. All storage nodes contacted by a reader will likely have some records to deliver. No I/O or network resources are wasted. We ensure that only one copy of every record is read from disk and delivered over the network by including the copy set in the header of every record copy. A simple server-side filtering scheme based on copy sets, coupled with a dense copy set index, guarantees that in steady state only one node in the copy set reads and delivers a copy of the record to a particular reader.
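
One way such copy-set filtering could work is sketched below: every copy of a record carries the same copy set in its header, so each storage node can decide locally, and consistently with the others, whether it is the node that should deliver the record to a given reader. The specific rule used here (the first healthy node in the copy set delivers) is an assumption for illustration, not LogDevice's actual filtering scheme.

#include <cstdint>
#include <unordered_set>
#include <vector>

using NodeID = uint32_t;

// Decide whether this storage node should deliver its copy of a record.
// copySet lists the nodes holding copies, in the order recorded in every
// copy's header; nodesDown is the set of nodes the reader believes are down.
bool shouldDeliver(NodeID self,
                   const std::vector<NodeID>& copySet,
                   const std::unordered_set<NodeID>& nodesDown) {
    for (NodeID node : copySet) {
        if (nodesDown.count(node) != 0) {
            continue; // skip nodes known to be unavailable
        }
        return node == self; // only the first healthy node delivers
    }
    return false; // no healthy copy remains; the reader will observe a gap
}

Since every node applies the same rule to the same header, in steady state exactly one copy of each record is read from disk and sent over the network.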

Sequence numbers

As shown in Figure 1, the sequence numbers of records in LogDevice are not integers, but pairs of integers. The first component of the pair is called the epoch number; the second is the offset within the epoch. The usual tuple comparison rules apply. The use of epochs in LSNs is another availability optimization. When a sequencer node crashes or otherwise becomes unavailable, LogDevice must bring up replacement sequencer objects for all the affected logs. The LSNs that each new sequencer starts to issue must be strictly greater than the LSNs of all records already written for that log. Epochs allow LogDevice to guarantee this without actually looking at what has been stored. When a new sequencer comes up, it receives a new epoch number from the metadata component called the epoch store. The epoch store acts as a repository of durable counters, one per log, that are seldom incremented and are guaranteed to never regress. Today we use Apache ZooKeeper as the epoch store for LogDevice.
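
A minimal sketch of the LSN structure and the tuple comparison described above is shown below; the fixed 32-bit widths of the two components are an assumption for illustration. The point is simply that any LSN issued in a later epoch compares greater than every LSN from an earlier epoch, whatever the offsets.

#include <cstdint>
#include <tuple>

// An LSN is an (epoch, offset-within-epoch) pair compared lexicographically.
struct LSN {
    uint32_t epoch;
    uint32_t offset;
};

inline bool operator<(const LSN& a, const LSN& b) {
    return std::tie(a.epoch, a.offset) < std::tie(b.epoch, b.offset);
}

A replacement sequencer that receives epoch 6 from the epoch store can start issuing LSNs at {6, 0}; every one of them is greater than, say, {5, 1000000}, without the new sequencer ever inspecting what was stored during epoch 5.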

Many-to-many rebuilding

Drives fail. Power supplies fail. Rack switches fail. As these failures occur, the number of available copies decreases for some or all records. When, after several consecutive failures, that number drops to zero, we have data loss, or at least the loss of read availability of some records. Both are bad outcomes that LogDevice tries to avoid as much as possible. Rebuilding creates more copies for records that have become under-replicated (have fewer than the target number of copies R) after one or more failures.

In order to be effective, rebuilding has to be fast. It must complete before the next failure takes out the last copy of some unlucky record. Similar to HDFS, the rebuilding implemented by LogDevice is many-to-many. All storage nodes act as both donors and recipients of record copies. Marshaling the resources of the entire cluster for rebuilding allows LogDevice to fully restore the replication factor of all records affected by the failure at a rate of 5-10GB per second.

Rebuilding coordination is fully distributed and is performed over an internal metadata log that we call the event log.

The local log store

The separation of sequencing and storage helps allocate the aggregate CPU and storage resources of the cluster to match the changing, sometimes spiky workload. However, the per-node efficiency of a distributed data store is largely determined by its local storage layer. In the end, multiple record copies must be saved on non-volatile devices, such as hard drives or SSDs. RAM-only storage is impractical when storing hours' worth of records at 100MBps+ per node. When backlog duration is measured in days (not an uncommon requirement at Facebook), hard drives are far more cost-efficient than flash. This is why we designed the local storage component of LogDevice to perform well not only on flash with its huge IOPS capacity, but also on hard drives. Commodity HDDs can push a respectable amount of sequential writes and reads (100-200MBps), but top out at 100-140 random IOPS.

We call the local log store of LogDevice LogsDB. It is a write-optimized data store designed to keep the number of disk seeks small and controlled, and the write and read IO patterns on the storage device mostly sequential. As their name implies, write-optimized data stores aim to provide great performance when writing data, even if it belongs to multiple files or logs. The write performance is achieved at the expense of worse read efficiency on some access patterns. In addition to performing well on HDDs, LogsDB is particularly efficient for log tailing workloads, a common pattern of log access where records are delivered to readers soon after they are written. The records are never read again, except in rare emergencies: those massive backfills. The reads are then mostly served from RAM, making the reduced read efficiency of a single log irrelevant.

LogsDB is a layer on top of RocksDB, an ordered durable key-value data store based on LSM trees. LogsDB is a time-ordered collection of RocksDB column families, which are full-fledged RocksDB instances sharing a common write-ahead log. Each RocksDB instance is called a LogsDB partition. All new writes for all logs, be they one log or a million, go into the most recent partition, which orders them by (log id, LSN) and saves them on disk in a sequence of large sorted immutable files, called SST files. This makes the write IO workload on the drive mostly sequential, but creates the need to merge data from multiple files (up to the maximum allowed number of files in a LogsDB partition, typically about 10) when reading records. Reading from multiple files may lead to read amplification, or wasting some read IO.
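
Ordering records by (log id, LSN) inside a partition can be achieved with a composite key whose byte-wise ordering matches the logical ordering, for example by concatenating big-endian encodings of the two fields, as sketched below. This layout is an illustrative assumption, not the actual LogsDB key format.

#include <cstdint>
#include <string>

// Append a 64-bit value in big-endian byte order so that byte-wise key
// comparison (RocksDB's default) matches numeric comparison.
static void appendBigEndian64(std::string* out, uint64_t v) {
    for (int shift = 56; shift >= 0; shift -= 8) {
        out->push_back(static_cast<char>((v >> shift) & 0xff));
    }
}

// Hypothetical LogsDB-style record key: records sort first by log id, then by
// LSN, which keeps each log's records contiguous inside every SST file.
std::string makeRecordKey(uint64_t logId, uint64_t lsn) {
    std::string key;
    key.reserve(16);
    appendBigEndian64(&key, logId);
    appendBigEndian64(&key, lsn);
    return key;
}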

LogsDB controls read amplification in a way uniquely suited for the log data model, with its immutable records identified by immutable LSNs monotonically increasing with time. Instead of controlling the number of sorted files by compacting (merge-sorting) them into a bigger sorted run, LogsDB simply leaves the partition alone once it reaches its maximum number of SST files, and creates a new most recent partition. Because partitions are read sequentially, at no time will the number of files read concurrently exceed the maximum number of files in a single partition, even if the total number of SST files in all partitions reaches tens of thousands. Space reclamation is performed efficiently by deleting (or in some cases infrequently compacting) the oldest partition.
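
A rough model of the rollover policy just described: when the newest partition reaches its maximum number of SST files it is left alone, and a new partition becomes the write target, so a reader never has to merge more files than one partition can hold. The class and member names below are illustrative, not LogsDB internals.

#include <cstddef>
#include <deque>

// Simplified model of LogsDB's time-ordered partitions.
struct Partition {
    std::size_t sstFiles = 0; // sorted immutable files in this partition
};

class PartitionedStore {
public:
    explicit PartitionedStore(std::size_t maxFilesPerPartition)
        : maxFiles_(maxFilesPerPartition) {
        partitions_.emplace_back(); // the newest partition receives all writes
    }

    // Called whenever a memtable is flushed into a new SST file.
    void onSstFileCreated() {
        if (partitions_.back().sstFiles >= maxFiles_) {
            partitions_.emplace_back(); // seal the full partition, start a new one
        }
        ++partitions_.back().sstFiles;
    }

    // Space reclamation: drop whole partitions from the oldest end.
    void trimOldest() {
        if (partitions_.size() > 1) {
            partitions_.pop_front();
        }
    }

private:
    std::size_t maxFiles_;
    std::deque<Partition> partitions_;
};

Sequential readers touch the partitions one at a time, so they merge at most maxFilesPerPartition files at any moment, and reclaiming space amounts to dropping the oldest partition.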

Use cases and future work

LogDevice has become a versatile solution for a variety of logging workloads at Facebook. The following are just a few examples. Scribe is one of the larger users of LogDevice by total throughput, exceeding at peak a terabyte per second of ingest, delivered reliably and with the possibility of replay. Scribe provides a fire-and-forget write API with delivery latency expectations on the order of seconds. LogDevice clusters running Scribe are tuned for per-box efficiency, rather than low end-to-end latency or append latency. Maintaining secondary indexes on TAO data is another important use case for LogDevice. Here the throughput is not as large as with Scribe, but strict record ordering per log is important, and the expected end-to-end latency is on the order of 10ms. This required very different tuning. Another interesting example is machine learning pipelines, which use LogDevice for delivering identical event streams to multiple ML model training services.

LogDevice is under active development. It is written in C++ and has few external dependencies. New areas that we are currently exploring include disaggregated clusters, where storage and CPU-intensive tasks are handled by servers with different hardware profiles, support for very high volume logs, and efficient server-side filtering of records by an application-supplied key. Together these features will improve the hardware efficiency of LogDevice clusters, and will provide a scalable load distribution mechanism for the consumers of high throughput data streams.

We continue to iterate on LogDevice with the ultimate goal of contributing it to the open source community later in 2017.



Read the full article here by Facebook Code

LinkedIn announces open source tool to keep Kafka clusters running

Aug 30, 2017, 05:00 (by Ron Miller)

At the Kafka Summit in San Francisco, LinkedIn announced a new load balancing tool called Cruise Control, which has been developed to help keep Kafka clusters up and running. The company developed Kafka, an open source message streaming tool, to make it easier to move massive amounts of data around a network from application to application. It has become so essential that LinkedIn has dedicated 1,800 servers to moving over 2 trillion transactions per day through Kafka.




Read the full article here by Linux Today

How To Disable Intel ME Courtesy of the NSA

I know there are a lot of folks that do not like the Intel Management Engine and whatever they think it might do on their computers. The simple fact, though, is that it has been a needed component for years now. Positive Technologies has laid out everything you need to keep enough of Intel ME around to initialize and launch the Intel processor, and then shut down the process once that has all taken place. And it seems that the NSA's HAP program might be responsible for it all. If you are willing to get your hands a bit dirty, you can now turn Intel ME off once booted.


Closing thoughts - So we have found an undocumented PCH strap that can be used to switch on a special mode disabling the main Intel ME functionality at an early stage. We can prove this by the following facts:

  • Binary analysis of Intel ME firmware, as described in this paper.
  • If we remove some critical ME modules and enable HAP mode, Intel ME does not crash. This proves that HAP disables ME at an early stage.
  • We are quite sure that Intel ME is unable to exit this mode because we have not found code capable of doing so in the RBE, KERNEL, and SYSLIB modules.




Read the full article here by [H]ardOCP News/Article Feed


Hardening the Kernel in Android Oreo

Posted by Sami Tolvanen, Senior Software Engineer, Android Security

The hardening of Android's userspace has increasingly made the underlying Linux kernel a more attractive target to attackers. As a result, more than a third of Android security bugs were found in the kernel last year. In Android 8.0 (Oreo), significant effort has gone into hardening the kernel to reduce the number and impact of security bugs.

Android Nougat worked to protect the kernel by isolating it from userspace processes with the addition of SELinux ioctl filtering and requiring seccomp-bpf support, which allows apps to filter access to available system calls when processing untrusted input. Android 8.0 focuses on kernel self-protection with four security-hardening features backported from upstream Linux to all Android kernels supported in devices that first ship with this release.

Hardened usercopy

Usercopy functions are used by the kernel to transfer data from user space to kernel space memory and back again. Since 2014, missing or invalid bounds checking has caused about 45% of Android's kernel vulnerabilities. Hardened usercopy adds bounds checking to usercopy functions, which helps developers spot misuse and fix bugs in their code. Also, if obscure driver bugs slip through, hardening these functions prevents the exploitation of such bugs.

This feature was introduced in the upstream kernel version 4.8, and we have backported it to Android kernels 3.18 and above.

int buggy_driver_function(void __user *src, size_t size)
{
    /* potential size_t overflow (don’t do this) */
    u8 *buf = kmalloc(size * N, GFP_KERNEL);
    …
    /* results in buf smaller than size, and a heap overflow */
    if (copy_from_user(buf, src, size))
        return -EFAULT;

    /* never reached with CONFIG_HARDENED_USERCOPY=y */
}

An example of a security issue that hardened usercopy prevents.

Privileged Access Never (PAN) emulation

While hardened usercopy functions help find and mitigate security issues, they can only help if developers actually use them. Currently, all kernel code, including drivers, can access user space memory directly, which can lead to various security issues.

To mitigate this, CPU vendors have introduced features such as Supervisor Mode Access Prevention (SMAP) in x86 and Privileged Access Never (PAN) in ARM v8.1. These features prevent the kernel from accessing user space directly and ensure developers go through usercopy functions. Unfortunately, these hardware features are not yet widely available in devices that most Android users have today.

Upstream Linux introduced software emulation for PAN in kernel version 4.3 for ARM and 4.10 for ARM64. We have backported both features to Android kernels starting from 3.18.

Together with hardened usercopy, PAN emulation has helped find and fix bugs in four kernel drivers in Pixel devices.

int buggy_driver_copy_data(struct mydata *src, void __user *ptr)
{
    /* failure to keep track of user space pointers */
    struct mydata *dst = (struct mydata *)ptr;
    …
    /* read/write from/to an arbitrary user space memory location */
    dst->field = … ;    /* use copy_(from|to)_user instead! */
    …
    /* never reached with PAN (emulation) or SMAP */
}

An example of a security issue that PAN emulation mitigates.

Kernel Address Space Layout Randomization (KASLR)

Android has included support for Address Space Layout Randomization (ASLR) for years. Randomizing memory layout makes code reuse attacks probabilistic and therefore more difficult for an attacker to exploit, especially remotely. Android 8.0 brings this feature to the kernel. While Linux has supported KASLR on x86 since version 3.14, KASLR for ARM64 has only been available upstream since Linux 4.6. Android 8.0 makes KASLR available in Android kernels 4.4 and newer.

KASLR helps mitigate kernel vulnerabilities by randomizing the location where kernel code is loaded on each boot. On ARM64, for example, it adds 13–25 bits of entropy depending on the memory configuration of the device, which makes code reuse attacks more difficult.

Post-init read-only memory

The final hardening feature extends existing memory protections in the kernel by creating a memory region that's marked read-only after the kernel has been initialized. This makes it possible for developers to improve protection on data that needs to be writable during initialization, but shouldn't be modified after that. Having less writable memory reduces the internal attack surface of the kernel, making exploitation harder.

Post-init read-only memory was introduced in upstream kernel version 4.6 and we have backported it to Android kernels 3.18 and newer. While we have applied these protections to some data structures in the core kernel, this feature is extremely useful for developers working on kernel drivers.

Conclusion

Android Oreo includes mitigations for the most common source of security bugs in the kernel. This is especially relevant because 85% of kernel security bugs in Android have been in vendor drivers that tend to get much less scrutiny. These updates make it easier for driver developers to discover common bugs during development, stopping them before they can reach end user devices.



Read the full article here by Android Developers Blog

Wednesday, 30 August 2017

Supervillain Plan

Someday, some big historical event will happen during the DST changeover, and all the tick-tock articles chronicling how it unfolded will have to include a really annoying explanation next to their timelines.

Read the full article here by xkcd.com

Researchers Find a Way To Disable Intel ME Component Courtesy of the NSA

An anonymous reader writes: Researchers from Positive Technologies -- a provider of enterprise security solutions -- have found a way to disable the Intel Management Engine (ME), a much-hated component of Intel CPUs that many have called a secret backdoor, even though Intel advertised it as a "remote PC management" solution. People have been trying for years to find a way to disable the Intel ME component, but have failed all this time. This is because disabling Intel ME crashes computers, as Intel ME is responsible for the initialization, power management, and launch of the main Intel processor. Positive Technologies experts revealed they discovered a hidden bit inside the firmware code, which when flipped (set to "1") will disable ME after ME has done its job and booted up the main processor. The bit is labelled "reserve_hap" and a nearby comment describes it as "High Assurance Platform (HAP) enable." High Assurance Platform (HAP) is an NSA program that describes a series of rules for running secure computing platforms. Researchers believe Intel has added the ME-disabling bit at the behest of the NSA, who needed a method of disabling ME as a security measure for computers running in highly sensitive environments. The original submission linked to a comment with more resources on the "Intel CPU backdoor" controversy.

Read more of this story at Slashdot.



Read the full article here by Slashdot

Tuesday, 29 August 2017

ARCore: Augmented reality at Android scale

Posted by Dave Burke, VP, Android Engineering

With more than two billion active devices, Android is the largest mobile platform in the world. And for the past nine years, we've worked to create a rich set of tools, frameworks and APIs that deliver developers' creations to people everywhere. Today, we're releasing a preview of a new software development kit (SDK) called ARCore. It brings augmented reality capabilities to existing and future Android phones. Developers can start experimenting with it right now.

We've been developing the fundamental technologies that power mobile AR over the last three years with Tango, and ARCore is built on that work. But, it works without any additional hardware, which means it can scale across the Android ecosystem. ARCore will run on millions of devices, starting today with the Pixel and Samsung's S8, running 7.0 Nougat and above. We're targeting 100 million devices at the end of the preview. We're working with manufacturers like Samsung, Huawei, LG, ASUS and others to make this possible with a consistent bar for quality and high performance.

ARCore works with Java/OpenGL, Unity and Unreal and focuses on three things:

  • Motion tracking: Using the phone's camera to observe feature points in the room and IMU sensor data, ARCore determines both the position and orientation (pose) of the phone as it moves. Virtual objects remain accurately placed.
  • Environmental understanding: It's common for AR objects to be placed on a floor or a table. ARCore can detect horizontal surfaces using the same feature points it uses for motion tracking.
  • Light estimation: ARCore observes the ambient light in the environment and makes it possible for developers to light virtual objects in ways that match their surroundings, making their appearance even more realistic.

Alongside ARCore, we've been investing in apps and services which will further support developers in creating great AR experiences. We built Blocks and Tilt Brush to make it easy for anyone to quickly create great 3D content for use in AR apps. As we mentioned at I/O, we're also working on Visual Positioning Service (VPS), a service which will enable world scale AR experiences well beyond a tabletop. And we think the Web will be a critical component of the future of AR, so we're also releasing prototype browsers for web developers so they can start experimenting with AR, too. These custom browsers allow developers to create AR-enhanced websites and run them on both Android/ARCore and iOS/ARKit.

ARCore is our next step in bringing AR to everyone, and we'll have more to share later this year. Let us know what you think through GitHub, and check out our new AR Experiments showcase where you can find some fun examples of what's possible. Show us what you build on social media with #ARCore; we'll be resharing some of our favorites.



Read the full article here by Android Developers Blog

Sunday, 27 August 2017

3D OpenGL Acceleration For Windows Guests On QEMU Using VirGL/VirtIO

While there has been VirGL as one of the options for allowing 3D/OpenGL acceleration of Linux guests within QEMU/KVM virtual machines, directing the calls to the host system's OpenGL driver, that support hasn't been available when Windows is running as a QEMU/KVM guest. That is changing, though, thanks in large part to this year's Google Summer of Code.

Nathan Gauër is the student developer wrapping up his GSoC 2017 work on a VirGL Windows guest driver that brings OpenGL acceleration to Windows guests. He has a working kernel driver for the Windows guest to communicate with the VirtIO GPU, plus an ICD OpenGL driver as the user-space part of the equation. This effort is only about getting OpenGL working on Windows and doesn't magically allow Direct3D or the like, for those who may think this would be a new approach for Windows gaming on Linux.

A screenshot provided by Nathan shows the OpenGL driver working on Windows.

If you are interested in this experimental work for getting OpenGL to Windows guests under QEMU, see this blog post with additional details via GitHub.



Read the full article here by Phoronix

Smart code helps your phone browse the web twice as quickly

Many attempts at improving the speed of mobile web browsing involve some obvious sacrifices: Google's Accelerated Mobile Pages alter how you view the pages, while proxies introduce the risk of someone intercepting your sensitive data. Researchers have found a way to boost performance without those compromises, though. Their Vroom architecture loads mobile websites up to twice as quickly by optimizing how a site loads, no matter how that site is built -- even an AMP page stands to load faster. It boils down to loading more of the site at once, rather than the back and forth that usually takes place.

Typically, your phone's web browser has to process nearly 100 web links before you see an entire page. It has to make multiple requests and spend a lot of time idling. Vroom, however, bundles the info that a browser needs to load a page. When your browser requests info, the server also provides "hints" about other necessary resources and coordinates the delivery of that content to make the most of your phone's processor.

The code does have a catch, as the name suggests: the web server has to know how to reshuffle data. Even if Vroom was ready right away (it isn't), it'd take a while to propagate. However, there's a realistic chance of that happening. The project has the backing of Google's Faculty Research Award, the National Science Foundation and MIT, so there's clearly interest in translating this technology to the real world.

Source: University of Michigan



Read the full article here by Engadget

Friday, 25 August 2017

Go 1.9 is released

Aug 24, 2017, 23:00

Today the Go team is happy to announce the release of Go 1.9. You can get it from the download page. There are many changes to the language, standard library, runtime, and tooling. This post covers the most significant visible ones. Most of the engineering effort put into this release went to improvements of the runtime and tooling, which makes for a less exciting announcement, but nonetheless a great release.




Read the full article here by Linux Today

Demystifying Cryptocurrencies, Blockchain, and ICOs

Despite significant advancements, cryptocurrencies remain highly controversial; while some tout them as “the next internet,” others view Bitcoin as “evil.” At their simplest, they are the newest fintech fad; yet at their most complex, they’re a revolutionary technology challenging the political and social underpinnings of society. This article will demystify cryptocurrencies’ appeal, their complex underlying technology, and their value. It will also examine the accounting, regulatory, and privacy issues surrounding the space.

Read the full article here by Toptal Blog

Drone Training

The joke will be on him in a few weeks when animal control shows up and takes custody of his Roomba.

Read the full article here by xkcd.com

Android 8.0 overhauls installing apps from unknown sources

This is a pretty big change, detailed only a few days ago.

Eagle-eyed users of Android O will have noticed the absence of the 'Allow unknown sources' setting, which has existed since the earliest days of Android to facilitate the installation of apps from outside of Google Play and other preloaded stores. In this post we'll talk about the new Install unknown apps permission and the security benefits it brings for both Android users and developers.

Google goes into more detail a few paragraphs down:

In Android O, the Install unknown apps permission makes it safer to install apps from unknown sources. This permission is tied to the app that prompts the install - just like other runtime permissions - and ensures that the user grants permission to use the install source before it can prompt the user to install an app. When used on a device running Android O and higher, hostile downloaders cannot trick the user into installing an app without having first been given the go-ahead. This new permission provides users with transparency, control, and a streamlined process to enable installs from trusted sources. The Settings app shows the list of apps that the user has approved for installing unknown apps. Users can revoke the permission for a particular app at any time.

Good move.

Read the full article here by OSNews

Microsoft will never again sneakily force Windows downloads on users

There have been various controversies with Windows 10, from issues with privacy and telemetry to ads and forced upgrades. Following a court case, Microsoft has vowed never to force upgrade files onto users again. Windows users in Germany were particularly unimpressed when Microsoft forcibly downloaded many gigabytes of files to upgrade from Windows 7 and 8 to Windows 10. Having held out for 18 months, and having lost its case twice, Microsoft has finally agreed to stop its nefarious tactics. After a lengthy battle with Germany's Baden-Württemberg consumer rights center, Microsoft made the announcement to avoid the continuation of legal action.…


Read the full article here by Betanews

Thursday, 24 August 2017

Blizzard's making 'Overwatch' competitive seasons short and sweet

By now, Overwatch gamers are accustomed to getting updates. Blizzard keeps spoiling us with new characters, modes, and events. They in turn keep the game fresh for returning players 14 months down the line. Judging by its latest announcement, the developer will keep tightening the screws on its well-oiled blockbuster. For its sixth competitive season, Overwatch is receiving a decent shake-up.

The biggest change is that Blizzard is shortening competitive seasons to last two months instead of three. According to VP Jeff Kaplan, this will make the experience more fun and rewarding, while preventing it from stagnating. He added that despite the smaller seasons, you should still "roughly" get the same amount of competitive points for the year.

That's not all. Additional tweaks to the upcoming competitive season target skill rating decay. In the future, you'll only need to play five games per week at diamond and above to curb decay. And, if you do get hit by it, you'll only lose 25 skill rating instead of 50.

Blizzard's also changing how control maps work. Right now, an "inordinate" amount of these matches are going into overtime, which is resulting in "very long match times," explains Kaplan. To solve this, in season six control maps will be best out of three. This will make losing a control map "sting less." Placement matches are set to work differently too. In the past, Overwatch players were placed lower than where their skill rating should have been to give them more of a sense of progression. Next time round, your placement will more accurately reflect your skill level from the get go.

To further level the playing field, higher tiered matches will also be more balanced going forward. As a result, players won't find themselves in as many matches where they end up demolishing the opposition. This will result in more skill rating gains, but could also affect queues. "Hopefully it's not super noticeable to most of us," said Kaplan. "And hopefully it's a lot of fun, despite the fact that you might have to wait for a long time."

Kaplan is reassuring players that these changes will be closely monitored, in case they don't work as planned. In order to keep you hooked, Blizzard is already working on more tweaks for seasons seven and eight.

Source: Overwatch (YouTube)



Read the full article here by Engadget

Wednesday, 23 August 2017

Spyware backdoor prompts Google to pull 500 apps with >100m downloads


At least 500 apps collectively downloaded more than 100 million times from Google's official Play Market contained a secret backdoor that allowed developers to install a range of spyware at any time, researchers said Monday.

The apps contained a software development kit called Igexin, which makes it easier for apps to connect to ad networks and deliver ads that are targeted to the specific interests of end users. Once an app using a malicious version of Igexin was installed on a phone, the developer kit could update the app to include spyware at any time, with no warning. The most serious spyware installed on phones consisted of packages that stole call histories, including the time a call was made, the number that placed the call, and whether the call went through. Other stolen data included GPS locations, lists of nearby Wi-Fi networks, and lists of installed apps.

In a blog post published Monday, researchers from mobile security company Lookout wrote:

It is becoming increasingly common for innovative malware authors to attempt to evade detection by submitting innocuous apps to trusted app stores, then at a later time, downloading malicious code from a remote server. Igexin is somewhat unique because the app developers themselves are not creating the malicious functionality - nor are they in control or even aware of the malicious payload that may subsequently execute. Instead, the invasive activity initiates from an Igexin-controlled server.
The apps that contained the SDK included:

  • Games targeted at teens (one with 50M-100M downloads)
  • Weather apps (one with 1M-5M downloads)
  • Internet radio (500K-1M downloads)
  • Photo editors (1M-5M downloads)
  • Educational, health and fitness, travel, emoji, home video camera apps

Not all of the 500 apps had installed one of the plugins silently delivered by Igexin, but the researchers said the developer kit could have caused any of the apps to download and install such plugins whenever the development kit operators wanted. The type of plugin that could be delivered was limited by the Android permission system. Additionally, not all versions of Igexin delivered the spying functions. Versions that did relied on a plugin framework that allowed devices to load arbitrary code, as directed by the response to requests the devices made periodically to a server located at http://ift.tt/2vaVljl.

In an e-mail, a Google spokesman said: "We’ve taken action on these apps in Play, and automatically secured previously downloaded versions of them as well. We appreciate contributions from the research community that help keep Android safe."

Igexin officials didn't respond to an e-mail seeking comment for this post.



Read the full article here by Ars Technica

Tuesday, 22 August 2017

BlockedInRussia.com

Since 2012, the Russian government has implemented increasingly restrictive laws, starting by censoring individual URLs, domain names, and IP addresses contained within a federal blacklist.
Coming as no surprise, Russia has recently taken further action to limit access to online information, passing a new law that will further restrict the way users access and experience the world wide web.
In response, BlockedinRussia.com has been launched by a collective of activists concerned with internet freedom and privacy. The website will help shed light on the state of the internet in Russia today.
BlockedinRussia.com is a simple tool that checks if a website is blocked or accessible from Russia. Anyone can simply enter a website and check if it is accessible from within the country.

Saturday, 19 August 2017

Google countdown teases Android O reveal during solar eclipse

Google is capitalizing on the solar eclipse hype to tease out the latest version of Android. On Monday at 2:40 PM Eastern, the internet juggernaut says we'll "meet the next release of Android and all of its super (sweet) new powers." Given that the final beta shipped last month, this could mean a few possibilities: either the name will be revealed (my money is on Oreo), or we'll learn the update's release day. After months of previews and non-final builds, at least we won't have much longer to wait for when we'll actually be able to download and install the new OS.

What does Android O entail? Decreased load times, better battery life and lot of onboard AI features like Google Lens (a visual search app), copy-less pasting and a picture-in-picture mode, among other things. None of them immediately call Nabisco's trademark cookie to mind, but hey, a guy can dream, right?

Source: Google



Read the full article here by Engadget

Here’s what happens to your retina if you view an eclipse without protection


Americans are making their last dashes for glasses and viewers to watch the rare total solar eclipse that will glide across the continental US on Monday. Meanwhile, eye doctors are trying to clear away any orbiting debris that's obscuring vision safety information—and spotlight the dangers of unsafe viewing.

Everyone knows that watching an eclipse—or staring into the Sun in general—can damage eyes. But in a series of articles published Friday in JAMA and JAMA Ophthalmology, a group of ophthalmologists explains in detail how sunlight damages the retina, plus dispels some misconceptions about viewing techniques for the rare event. They also provide a case study of what happens when you go into an eclipse event eyeballs-out.

David Calkins and Paul Sternberg of The Vanderbilt Eye Institute in Nashville, Tennessee, (which will experience a total eclipse) wrote one of the pieces in JAMA Ophthalmology. In it, they point out that many people have the misconception that an eclipse allows safe viewing of the Sun—that the lunar disk will cover everything but the Sun’s beautiful corona. This is true for those lucky ones that are along the path of the total eclipse, albeit only briefly. For those in the totality path, the Sun’s core will be blotted out for no more than two minutes and 41 seconds. “However, for most people, at least some portion of the Sun’s core will be visible during the event,” Calkins and Sternberg note.

If total eclipse viewers take off their protection, it’s important to have it back in place before the moon begins sliding aside. Otherwise, you risk damage. Everyone else should be cautious the whole time.

Core sunlight is intense, at about 1,350 Watts per square meter irradiance, they note. With the refractive nature of our eyes, this leads to a hefty dose for the teeny 1.5mm fovea—the central pit of the retina. This pit is packed with photoreceptor cone cells responsible for color vision and is what gives us our sharp central vision.

Sunlight damages the eye in two distinct ways

Exposing your naked eyeball to sunlight causes solar retinopathy (aka photic retinopathy or solar retinitis), which involves two types of damage to the retina, particularly the fovea: burns and photochemical toxicity.

The former may be the most easily understood. A big chunk of sunlight is near-infrared radiation (700 to 1,500 nanometers), which can cause heat and thus burns, Calkins and Sternberg note. Because our eyes lack pain receptors, we can scorch the fovea without even noticing as we gaze at the Sun.

But, the eye experts write, a "more pressing concern" is photochemical toxicity. The large wallop of visible light from the Sun gets soaked up by photoactive materials in the eye that are just itching to form free radicals and reactive oxygen species. These include heme proteins, melanosomes, lipofuscin, and other chemicals. Once generated, free radicals and reactive oxygen species can attack many types of molecules and break down membranes, leading to tissue damage and cell death. Once retina tissue is destroyed, it cannot regenerate.

The authors say that a study of 45 people who damaged their eyes during a 1999 eclipse in the UK supports the idea that photochemical toxicity is a frequent cause of vision damage in solar retinopathy. In those cases, most of the damage was not permanent. Only four of the 45 reported lingering symptoms—discomfort and vision problems—after seven months.

But, of course, irreversible damage is more likely with longer and larger exposures. A case report published by Ta C. Chang and Kara Cavuoto of the University of Miami’s Bascom Palmer Eye Institute detailed lasting injuries in a 12-year-old girl. She suffered severe vision damage after staring at the full Sun for about one full minute. The eye experts presented images of her damaged retina, one of which was obtained by optical coherence tomography (OCT), which is kind of like an ultrasound for the eye that uses light to create cross-sectional tissue images. The images show bright pitted spots (see white arrows) in her fovea where photoreceptor segments have been wiped out by Sun damage. On follow-up exams, her vision hadn’t gotten better.

Infrared (A) and macular optical coherence tomography (OCT) (B) images demonstrating a hyperreflective spot in the fovea (arrowheads), presumably representing disruption of the inner and outer photoreceptor segments without evidence of underlying retinal pigment epithelial defects, consistent with bilateral solar retinopathy. (Image: The JAMA Network)

Here’s how to avoid such damage

In JAMA, Neil Bressler, of Johns Hopkins University School of Medicine, and colleagues, lay out how to safely view the eclipse.

First, what not to do: Don’t view the eclipse with your naked eye or unfiltered telescopes, binoculars, sunglasses (yeah, even if they’re really dark), camera lenses, or other optics devices. Don’t use anything that focuses light, even if you’re wearing eclipse glasses. NASA notes that the concentrated solar rays can damage the filter and enter your eyes.

To safely view the eclipse, turn to:

  • No. 14 welder's glasses, available at welding supply stores.
  • Pinhole projectors. These allow you to see the eclipse by projecting sunlight through a pinhole onto a viewable surface (Note: it is not just looking through a pinhole). Here’s more information on these.
  • Aluminized Mylar filters/commercial glasses that do not have any damage or scratches. The American Astronomical Society says these should be verified by an accredited testing laboratory to meet the ISO 12312-2 international safety standard. Watch out, there are scams out there. The AAS has a handy list of reputable versions on their website.

If you do fear that your eyes have been damaged by sunlight, promptly see an eye doctor, who will be able to diagnose you with a clinical evaluation and diagnostic tests, such as OCT.

JAMA, 2017. DOI: 10.1001/jama.2017.9495 (About DOIs).

JAMA Ophthalmology, 2017. DOI: 10.1001/jamaophthalmol.2017.2936



Read the full article here by Ars Technica

Self-Driving Wheelchairs Debut in Hospitals and Airports


Autonomous vehicles can add a new member to their ranks—the self-driving wheelchair. This summer, two robotic wheelchairs made headlines: one at a Singaporean hospital and another at a Japanese airport.

The Singapore-MIT Alliance for Research and Technology, or SMART, developed the former, first deployed in Singapore’s Changi General Hospital in September 2016, where it successfully navigated the hospital’s hallways. It is the latest in a string of autonomous vehicles made by SMART, including a golf cart, electric taxi and, most recently, a scooter that zipped more than 100 MIT visitors around on tours in 2016.

The SMART self-driving wheelchair has been in development for about a year and a half, since January 2016, says Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator in the SMART Future Urban Mobility research group. Today, SMART has two wheelchairs in Singapore and two wheelchairs at MIT being tested in a variety of settings, says Rus.

The robot’s computer uses data from three lidars to make a map.  A localization algorithm then determines where it is in the map. The chair’s six wheels lend stability, and the chair is designed to make tight turns and fit through normal-sized doorframes. “When we visited several retirement communities, we realized that the quality of life is dependent on mobility. We want to make it really easy for people to move around,” said Rus in a recent MIT statement.

A second autonomous wheelchair recently premiered at Haneda Airport in Tokyo, designed by Panasonic and Whill, Inc., creator of the Model A Whill wheelchair, a sleek, hi-tech wheelchair now on the market in Japan and the United States.

According to a recent press release, Panasonic is planning to conduct technical trials of the WHILL NEXT this year. Like the SMART wheelchair, the WHILL NEXT uses sensors to detect nearby obstacles. It also employs automation technology developed for Panasonic’s autonomous (and adorable) hospital delivery robot, HOSPI. The wheelchair identifies its position, selects routes, and moves to a chosen destination based on a user’s input into a smartphone app. It can even be hailed with the app – the Uber of wheelchairs.

The WHILL NEXT is also able to sync up with nearby wheelchairs to travel in a column, which is useful for a family or a group, the company notes. Best of all, each wheelchair automatically returns to its home base, reducing the need for airport staff to collect the chairs.

Beyond use in hospitals and airports, the SMART team says they envision a connected autonomous mobility system, where a user could use a scooter or wheelchair indoors at an office, zip outside and pick up a golf cart to cross the parking lot, and slip into an autonomous car to drive home. Recent studies with the scooter suggest the control algorithms work indoors as well as out, according to a press release last year. “The autonomous wheelchair could be very useful in any pedestrian environment—including hospitals and airports—and we are exploring all these possibilities,” Rus tells IEEE Spectrum.

Yet the field faces the challenge of commercialization. Not all hi-tech wheelchairs have sold well, such as Dean Kamen’s stair-climbing iBot, whose $25,000 price tag was one reason the device was discontinued in 2009. But hopefully the next generation of wheelchairs won’t be as expensive, says Rus. “The system consists of an off-the-shelf wheelchair augmented with an autonomy package. We hope the price point of the autonomy package can come down to make the system affordable.”



Read the full article here by Computing: IEEE Spectrum

Friday, 18 August 2017

Push authentication can replace the password

For a vestige of the past, the password has managed to hold on and remain alive -- even though some of the top people in computing said that it had already died over a decade ago. In one of his more famous predictions, Microsoft founder Bill Gates said that passwords were already on the way out in 2004. The problem is that Gates, for all his wisdom, didn't tell us what to use to replace passwords. "There is no doubt that over time, people are going to rely less and less on passwords," Gates said at the RSA conference in 2004. "People…


Read the full article here by Betanews

Wednesday, 16 August 2017

Intel’s self-driving supergroup brings Fiat Chrysler into the fold

Intel's self-driving supergroup has added another member to the team: Fiat Chrysler Automobiles (FCA). The partnership, which already includes BMW, Intel and Mobileye, has ambitious plans to get autonomous vehicles on the road by 2021, and FCA's inclusion is likely to make that goal more achievable.

FCA already has a decent chunk of autonomous expertise under its belt -- it partnered with Waymo (formerly Google) in 2016, adding 100 self-driving Pacifica minivans to the Waymo fleet -- but compared to its rivals the group lags behind. The new partnership gives the company access to the research and resources it needs to keep pace in the field.

But FCA brings something useful to the table, too. Scalability remains an issue for autonomous vehicles -- that is, making the technology work with different makes and models. FCA owns a number of very different brands, including Chrysler, Fiat and Jeep, so the partnership can now work towards autonomy across the board, from Jeeps used off-road to Fiats designed for zooming around cities.

The partnership aims to have 40 self-driving test vehicles on public roads by the end of 2017 -- a small step towards its grander plans to have Level 3 and Level 4/5 self-driving cars in production by 2021. FCA made a smart move getting involved at this stage -- no doubt other manufacturers will soon be looking for a piece of the action as well.

Via: The Verge



Read the full article here by Engadget

Monday, 14 August 2017

Bitcoin Has Passed $4,000

It was only last week that Bitcoin passed $3,000 for the first time, but now it's trading at over $4,000. Some say that the creation of Bitcoin Cash and the adoption of SegWit (which allows for more transactions) were responsible for the recent surge, having boosted confidence in the cryptocurrency's future. Wall Street's obsession with Bitcoin and its constant reporting may also be fueling the craze. So what happens next? No one knows. Bitcoin could crash 50% to $2,000 tomorrow or spike to $5,000 - and I don't think anyone who truly knows crypto would be surprised at either option. Everyone has a different opinion - some say the bubble is oversized and should have popped months ago - others think that bitcoin is currently just a fraction of what it could eventually trade at. Whichever camp you fall in, here's one friendly reminder: don't invest more than you can afford to lose - because if you ask anyone who's spent more than a few months in the cryptocurrency world they'll tell you it's a roller coaster.

Read the full article here by [H]ardOCP News/Article Feed

Sunday, 13 August 2017

Intel EOLs Atom Chip Used for Microsoft HoloLens

Intel is discontinuing the processor used in Microsoft's HoloLens this fall: customers have been asked to place their final orders for the Atom x5-Z8100P SoC by September 30, as final shipments will be made on October 30. The current-generation HoloLens was released last year with this very same Atom chip, and there is little word on what Microsoft will replace it with, given that the successor has been architected differently and is tipped to run an AI co-processor. While the device will run Windows 10 and will be equipped with an HPU, it will also feature an AI co-processor integrated into the latter that will use neural networks to enable object and voice recognition skills without the need for an Internet connection. The HPU 2.0 with the programmable AI co-processor will be a self-sufficient device that will run on the battery power of the next HoloLens (hence, its power consumption will not be too high). The HPU 2.0 and the AI co-processor were designed entirely in-house and therefore are tailored for the usage model of the HoloLens.

Read the full article here by [H]ardOCP News/Article Feed

Studio Ghibli reopens for Hayao Miyazaki's new film

Just a few years ago, Studio Ghibli's future was up in the air after co-founder and legendary animator Hayao Miyazaki (supposedly) retired. The place is about to be jumping again, however, as the studio has announced that it has re-opened to start production on a new film by the not-so-retired Miyazaki. He was on hand for a small ceremony on July 3rd, where he "brought together his main collaborators already engaged on his new feature film to talk to them about the project," the company said in a news release (translated).

A re-opening normally wouldn't be newsworthy, but at one point, it seemed like Studio Ghibli -- behind masterpieces like Spirited Away, Princess Mononoke and Howl's Moving Castle -- would not produce any more films. Recently, however, Amazon announced that it would stream Ronja, the Robber's Daughter, a children's TV series directed by Miyazaki's son Goro. Shortly after that came the news that the studio would produce a new film directed by Hayao Miyazaki.

Studio Ghibli hasn't released any details about the film itself, but many fans think it will be an adaptation of Miyazaki's first CGI short film Boro the Caterpillar. That short was delayed, but producer Toshio Suzuki has said it will likely be released in 2019 ahead of the Tokyo 2020 Olympic games.

Miyazaki has threatened retirement before, and when he stepped down in 2013, the company announced it would restructure for the next generation of animators. The films, while beloved by animation aficionados and cineastes, have never made tons of money -- Studio Ghibli's highest-grossing film was Spirited Away, which made $275 million back in 2001.

Over the years, however, the films have gained a much larger following thanks to streaming and DVD, so the new one will likely be met with unprecedented anticipation. Adding to that, Miyazaki will be 80 when it's completed, so this could well be his last film.

Via: Indie Wire

Source: Buta Connection (Facebook)



Read the full article here by Engadget

Friday, 11 August 2017

Mazda Will Bring the First Compression Ignition Gasoline Engine to Market


Diesel cars, no longer popular in Europe, are definitely pariahs in the US. Americans never warmed to them, and VW's scheme to dupe millions of customers and pollute the planet didn't help. But diesels provide better fuel economy than gasoline engines, even if they do emit more pollution. The ideal internal combustion engine, then, would combine the efficiency of a diesel with the (relatively) lower emissions of a gasoline engine.

Automotive engineers have spent decades trying to build just such an engine. Mazda just announced it's finally done it.

The Japanese automaker says the Skyactiv-X will be the world’s first commercially available compression ignition gasoline engine. I'll explain the tech in a moment, but the big takeaway is Mazda claims the engine is 20 to 30 percent more efficient than its current gas engines, and at least as efficient, if not more so, than its diesel engines.

This Skyactiv-X is part of Mazda's goofily titled "Sustainable Zoom-Zoom 2030" plan that includes a shift toward electric cars beginning in 2019. But Mazda knows EVs won't dominate the market anytime soon, and this engineering breakthrough suggests the auto industry isn't quite done improving internal combustion.

A New Kind of Boom

First, a primer for those of you who aren't petrolheads. Internal combustion engines, whether they burn gasoline or diesel fuel, generate power by compressing air in the cylinder, adding fuel, and detonating the mixture. That creates a small explosion, which forces a piston down, turning the crankshaft, and, through the drivetrain, the wheels. Gasoline engines use a spark plug to create the explosion. Diesel engines compress that air to a much greater degree, making the air inside the cylinder hot enough to detonate the fuel without a spark. That higher compression ratio means higher efficiency from the engine, or more miles from the fuel. Advantage: diesel.
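To put a rough number on that advantage: for an idealized Otto cycle, thermal efficiency is 1 - 1/r^(gamma - 1), where r is the compression ratio and gamma is roughly 1.4 for air. A quick back-of-the-envelope calculation (idealized, so the absolute figures are far above what any real engine achieves, but the trend is the point):

    # Idealized Otto-cycle efficiency as a function of compression ratio.
    GAMMA = 1.4  # heat capacity ratio of air

    def otto_efficiency(r):
        return 1.0 - r ** (1.0 - GAMMA)

    for r in (10, 14, 18):  # gasoline-like vs. diesel-like compression ratios
        print(f"r = {r:2d}: ideal efficiency ~ {otto_efficiency(r):.1%}")
    # Prints roughly 60%, 65% and 69% -- higher compression, higher efficiency.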

In gas engines, the fuel is injected earlier, and the air is kept cooler with a lower compression ratio. That means everything is mixed better, resulting in a cleaner burn that produces fewer particulates (soot, basically) and less nitrogen oxide (which is linked to asthma and other respiratory problems). Advantage: gasoline.

Efficiency-minded automakers constantly strive to elevate the compression ratio in gas engines, but are limited by something called auto-ignition (aka knocking), which is when the heat causes the fuel to detonate at the wrong time. But engineers have also been working on embracing auto-ignition, which would allow them to run an engine at even higher compression, and get that increase in efficiency.

Such an engine is great in theory and usually works in a laboratory. General Motors, Honda, and Hyundai have even demonstrated variations on the theme in prototypes over the last decade. And Formula 1 cars use the technology.

But compression ignition is difficult to control outside of a lab, or the cost-no-object arena of F1 racing. Those explosions happen whenever the fuel and air mix reach a given temperature, so engineers sacrifice the millisecond-precise control of spark plugs. The engines are rough when cold, unpredictable when pushed hard, and too often noisy and rattly. Not exactly market-ready.

Mazda hasn't given full details on how it cracked this conundrum, but computers can help. A sophisticated system can control the temperature and pressure in each cylinder by varying turbo boost or valve timing. It can mix in exhaust gas to change the mixture of fuel and air, and calculate just the right amount of gas to throw in.

Plus, Mazda didn't completely abandon current technology. The Skyactiv-X uses something Mazda calls "spark controlled compression ignition," which means the engine has spark plugs and uses them when necessary, like when the engine is cold. It promises a seamless hand-off between sparking and spark-free driving modes. Just how well it works remains to be seen, but you'll be able to judge for yourself, when the first cars with the new engine go on sale in 2019.



Read the full article here by Wired Top Stories

Ethereum Ethminer Performance With Radeon & GeForce OpenCL - August 2017

Here are my latest Ethereum Ethminer benchmarks for those interested in mining this cryptocurrency using OpenCL on AMD Radeon and NVIDIA GeForce GPUs.

These are my latest Ethereum mining benchmarks that just finished up this morning. Radeon tests were done using the latest ROCm binaries on Ubuntu 16.04. The Radeon cards tested were the R9 290, RX 480, RX 560, RX 580, and R9 Fury. Yes, there will be Ethereum benchmarks on the Radeon RX Vega on launch day when that time comes. On the NVIDIA side was their 384.59 driver on Ubuntu 16.04 LTS with the GeForce GTX 960, GTX 970, GTX 980, GTX 980 Ti, GTX 1050, GTX 1060, GTX 1070, GTX 1080, and GTX 1080 Ti.

All the benchmarks were facilitated via the Phoronix Test Suite.

Additionally, the Phoronix Test Suite was monitoring GPU temperatures, AC system power consumption, and generating performance-per-Watt metrics.
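For anyone wanting to reproduce a run like this, the Phoronix Test Suite exposes that monitoring through its MONITOR environment variable, so something along the lines of MONITOR=all phoronix-test-suite benchmark ethminer should yield comparable numbers, assuming the ethminer test profile is available for your Phoronix Test Suite installation.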

Vega could quite possibly beat the GeForce GTX 1080 Ti with Ethminer given AMD's strong OpenCL performance with ROCm. It will be interesting to see later this month. Anyhow, this is the current state of things with the latest OpenCL Linux drivers.

Here were the GPU temperatures during testing... Of course, the GPU temperatures can vary a lot depending upon the AIB's cooling system and/or after-market cooling.

And the overall AC system power consumption during the Ethereum benchmarking process.

Very interesting is the performance-per-Watt data. This should be another telling metric with Vega, but for now the GeForce GTX 1070 is leading when it comes to efficiency.

A Phoronix Test Suite module also provided the current performance-per-dollar for Ethereum GPU mining, but this is a bit of a mess considering the current lack of availability of most Radeon Polaris graphics cards... Prices for both NVIDIA and AMD were based on selections at Amazon.

Stay tuned for more interesting tests ahead.



Read the full article here by Phoronix

Tuesday, 8 August 2017

Android 8.0 gets "streaming OS updates"

When you get that "out of space" error message during an update, you're only "out of space" on the user storage partition, which is just being used as a temporary download spot before the update is applied to the system partition. Starting with Android 8.0, the A/B system partition setup is being upgraded with a "streaming updates" feature. Update data will arrive from the Internet directly to the offline system partition, written block by block, in a ready-to-boot state. Instead of needing ~1GB of free space, Google will be bypassing user storage almost entirely, needing only ~100KB worth of free space for some metadata. I promise not to make some snide remark about Android's update mess.

Read the full article here by OSNews

(PR) Intel Unveils Full Intel Core X-series Processor Family Specs; 14- to 18-Core


Today, Intel is releasing the specifications for the 12- to 18-core processors: Intel Core i9-7920X, Intel Core i9-7940X, Intel Core i9-7960X and the Extreme Edition Intel Core i9-7980XE processors. Announced at Computex 2017, the Intel Core X-series processor family is the most powerful, scalable and accessible high-end desktop platform offered by Intel, designed to deliver the performance needed to meet extreme computing demands for virtual reality (VR), content creation, gaming and overclocking.

The new X-series processor family is the ultimate platform for content creators and gamers. Multitasking becomes extreme mega-tasking with simultaneous, compute-intensive, multithreaded workloads aligned in purpose, powered by up to 18 cores and 36 threads. And, with up to 68 PCIe 3.0 lanes on the platform, people have the ability to expand their systems with fast SSDs, up to four discrete GFX cards and ultrafast Thunderbolt 3 solutions.

Content creators can expect up to 20 percent better performance for VR content creation and up to 30 percent faster 4K video editing over the previous generation. This means less time waiting and more time designing new worlds and experiences. Gamers and enthusiasts will experience up to 30 percent faster extreme mega-tasking for gaming over the previous generation.

The 12-core Intel Core X-series processor will be available starting on Aug. 28, and 14- to 18-core Intel Core X-series processors will be available starting Sept. 25. The 4- to 10-core Intel Core X-series processors are already on shelves and available at multiple retailers, along with more than 200 Intel X299 Chipset motherboards.



Read the full article here by techPowerUp!

Monday, 7 August 2017

How to Screw Up Autonomous Cars

Holy crap! Turn any stop sign into a 45mph speed limit sign, as far as some driverless cars are concerned. Actually, if you read through the article, this sort of thing will certainly have to be dealt with. I know no [H] readers would do this... well, strike that. This Car and Driver blog is worth a read. UW computer-security researcher Yoshi Kohno described an attack algorithm that uses printed images stuck on road signs. These images confuse the cameras on which most self-driving vehicles rely. In one example, explained in a document uploaded to the open-source scientific-paper site arXiv last week, small stickers attached to a standard stop sign caused a vision system to misidentify it as a Speed Limit 45 sign.

Read the full article here by [H]ardOCP News/Article Feed

Saturday, 5 August 2017

Quake 2 With Realtime GPU Pathtracing

Edd Biddulph, a computer graphics programmer, has released a video showing his from-scratch GPU-based pathtracer created specifically for Quake 2. Despite running on a Titan Xp there is some noise, which is inevitable on today's hardware. By Biddulph's estimates, it will be another 15-20 years before GPU hardware is powerful enough to do real-time path or ray tracing. I'll be honest, before seeing this and doing a little digging, I didn't know much about raytracing and pathtracing. Having learned a bit more, this is quite incredible, and despite the YouTube compression, you can really see the impressive lighting effects coming from this 20-year-old game. You can find the source code for this GPU pathtracer on GitHub. This is a from-scratch GPU-based pathtracer created specifically for Quake 2. It has several optimisations which are only viable due to the typical characteristics of Quake 2, such as support for parallelogram-shaped light sources, BSP ray traversal, and special handling of sky 'surfaces' (portals). It doesn't officially have a name, but you can call it Raylgun.

Read the full article here by [H]ardOCP News/Article Feed

Thursday, 3 August 2017

Functional World


Read the full article here by turnoff.us - geek comic site

Epic Games Is Planning To Use Vulkan By Default For Unreal Engine On Linux

Epic Games has an interesting goal of eventually being able to use the Vulkan graphics API by default on Linux systems running the Unreal Engine.

Epic developers are still working on improving the Vulkan renderer, tweaking it for better performance and working toward visual/feature parity with Direct3D 11. The upcoming Unreal Engine 4.17 release will bring more Vulkan improvements.

While Windows gamers will continue to see Direct3D used by default, Epic Games hopes its Vulkan renderer will eventually become the default when running the engine on Linux systems; for now, OpenGL remains the Linux default.

This was one of the interesting tidbits of information during yesterday's Khronos SIGGRAPH 2017 event. The Vulkan/OpenGL recording is embedded below.

VIDEO



Read the full article here by Phoronix

The Drummer John Bonham

What Makes John Bonham Such a Good Drummer?



Read the full article here by Likecool

Wednesday, 2 August 2017

How to Write, Package and Distribute a Library in Python


Python is a great programming language, but packaging is one of its weakest points. It is a well-known fact in the community. Installing, importing, using and creating packages has improved a lot over the years, but it's still not on par with newer languages like Go and Rust that learned a lot from the struggles of Python and other mature languages. 

In this tutorial, you'll learn everything you need to know about writing, packaging and distributing your own packages. 

How to Write a Python Library

A Python library is a coherent collection of Python modules that is organized as a Python package. In general, that means that all modules live under the same directory and that this directory is on the Python search path. 

Let's quickly write a little Python 3 package and illustrate all these concepts.

The Pathology Package

Python 3 has an excellent Path class in its pathlib module, which is a huge improvement over Python 2's awkward os.path module. But it's missing one crucial capability—finding the path of the current script. This is very important when you want to access files relative to the current script. 

In many cases, the script can be installed in any location, so you can't use absolute paths, and the working directory can be set to any value, so you can't use a relative path. If you want to access a file in a sub-directory or parent directory, you must be able to figure out the current script directory. 

Here is how you do it in Python:
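(A minimal sketch rather than the exact original snippet; it uses the inspect module to locate the file the code lives in, and the variable name script_dir is just a convention.)

    import pathlib
    import inspect

    # Ask inspect for the source file of a throwaway lambda defined right
    # here, resolve it to an absolute path, and take its parent directory.
    script_dir = pathlib.Path(
        inspect.getsourcefile(lambda: None)).resolve().parent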

To access a file called 'file.txt' in a 'data' sub-directory of the current script's directory, you can use the following code: print(open(str(script_dir / 'data/file.txt')).read())

With the pathology package, you have a built-in script_dir method, and you use it like this:
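(Again a sketch, assuming the class is exposed as pathology.path.Path.)

    from pathology.path import Path

    # script_dir() returns the directory of whatever script calls it.
    script_dir = Path.script_dir()
    print(open(str(script_dir / 'data/file.txt')).read())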

Compared with that plain-Python mouthful, the pathology package is very simple. It derives its own Path class from pathlib's Path and adds a static script_dir() method that always returns the path of the calling script. 

Here is the implementation:
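(A minimal sketch: it derives from the platform-specific concrete class, as described below, and resolves the caller's file through the inspect module.)

    import pathlib
    import inspect


    # type(pathlib.Path()) is PosixPath or WindowsPath, whichever matches
    # the platform we are running on.
    class Path(type(pathlib.Path())):

        @staticmethod
        def script_dir():
            # stack()[1] is the frame of whoever called script_dir();
            # its filename attribute is the path of that caller's file.
            caller_file = inspect.stack()[1].filename
            return Path(caller_file).resolve().parent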

Due to the cross-platform implementation of pathlib.Path, you can't derive directly from it and must instead derive from a platform-specific sub-class (PosixPath or WindowsPath). The script dir resolution uses the inspect module to find the caller and then its filename attribute.

Testing the Pathology Package

Whenever you write something that is more than a throwaway script, you should test it. The pathology module is no exception. Here are the tests using the standard unit test framework: 
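(A sketch of what such unittest-based tests might look like; the pathology.path module name is again an assumption.)

    import pathlib
    import unittest

    from pathology.path import Path


    class ScriptDirTest(unittest.TestCase):

        def test_script_dir_matches_this_files_directory(self):
            # The caller of script_dir() is this test module, so the result
            # should be the directory this very file lives in.
            expected = pathlib.Path(__file__).resolve().parent
            self.assertEqual(expected, Path.script_dir())

        def test_script_dir_is_absolute(self):
            self.assertTrue(Path.script_dir().is_absolute())


    if __name__ == '__main__':
        unittest.main()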

The Python Path

Python packages must be installed somewhere on the Python search path to be imported by Python modules. The Python search path is a list of directories and is always available in sys.path.
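To inspect yours, a couple of lines will do:

    import sys

    # One entry per line; an empty first line means '' (the current
    # directory) is on the path, which is typical for an interactive session.
    print('\n'.join(sys.path))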

Note that the empty first line of the output represents the current directory, so you can import modules from the current working directory, whatever it is. You can directly add or remove directories to/from sys.path. 

You can also define a PYTHONPATH environment variable, and there are a few other ways to control it. The standard site-packages directory is included by default, and this is where packages you install via pip go. 

How to Package a Python Library

Now that we have our code and tests, let's package it all into a proper library. Python provides an easy way via the setup module. You create a file called setup.py in your package's root directory. Then, to create a source distribution, you run: python setup.py sdist

To create a binary distribution called a wheel, you run: python setup.py bdist_wheel

Here is the setup.py file of the pathology package:
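(A representative sketch; every metadata value below is a placeholder rather than the package's real metadata.)

    from setuptools import setup, find_packages

    setup(
        name='pathology',
        version='0.1.0',                      # placeholder
        description='pathlib.Path with a script_dir() helper',
        long_description=open('README.md').read(),
        url='https://example.com/pathology',  # placeholder
        author='The Author',                  # placeholder
        author_email='author@example.com',    # placeholder
        license='MIT',                        # assumed
        classifiers=[
            'Programming Language :: Python :: 3',
        ],
        packages=find_packages(),
    )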

It includes a lot of metadata in addition to the 'packages' item that uses the find_packages() function imported from setuptools to find sub-packages.

Let's build a source distribution with the sdist command shown above.

Building it produces a warning because I used a non-standard README.md file; it's safe to ignore. The result is a tar-gzipped file under the dist directory.

Building a binary distribution with bdist_wheel works the same way and produces a wheel file under dist.

The pathology package contains only pure Python modules, so a universal wheel can be built. If your package includes C extensions, you'll have to build a separate wheel for each platform you want to support.

For a deeper dive into the topic of packaging Python libraries, check out How to Write Your Own Python Packages.

How to Distribute a Python Package

Python has a central package repository called PyPI (Python Packages Index). When you install a Python package using pip, it will download the package from PyPI (unless you specify a different repository). To distribute our pathology package, we need to upload it to PyPI and provide some extra metadata PyPI requires. The steps are:

  • Create an account on PyPI (just once).
  • Register your package.
  • Upload your package.

Create an Account

You can create an account on the PyPI website. Then create a .pypirc file in your home directory that tells the packaging tools which index servers to use and what credentials to present.

For testing purposes, you can also add a "pypitest" index server to the same .pypirc file.
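A sketch of such a .pypirc, covering both the main index and the test index (the username is a placeholder; the repository URLs shown are the ones PyPI used at the time of writing):

    [distutils]
    index-servers =
        pypi
        pypitest

    [pypi]
    repository = https://pypi.python.org/pypi
    username = your_pypi_username

    [pypitest]
    repository = https://testpypi.python.org/pypi
    username = your_pypi_username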

Register Your Package

If this is the first release of your package, you need to register it with PyPI. Use the register command of setup.py; it will ask you for your password. Note that in this example it points at the test repository rather than the live index.
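The command presumably looks something like python setup.py register -r pypitest, where pypitest is the index-server alias defined in the .pypirc sketch above.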

Upload Your Package

Now that the package is registered, we can upload it. I recommend using twine, which is more secure than uploading through setup.py directly. Install it as usual with pip install twine, then upload your package with twine; it will prompt for your password.
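The upload itself is presumably a one-liner along the lines of twine upload -r pypitest dist/*, pointing at whichever index server you registered against and pushing everything the earlier build steps left under dist.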

For a deeper dive into the topic of distributing your packages, check out How to Share Your Python Packages.

Conclusion

In this tutorial, we went through the fully fledged process of writing a Python library, packaging it, and distributing it through PyPI. At this point, you should have all the tools to write and share your libraries with the world.




Read the full article here by Nettuts+