Thursday, 16 November 2017

Visual Studio Live Share gives you pair programming without the shared keyboards

(PR) VESA Rolls Out DisplayID Version 2.0

Python in Unreal Engine 4

While Autodesk University might seem like too much of a manufacturing event for most game developers, a lot of game companies actually take part in it. Just a couple of hours ago we received an email from Epic Games giving a glimpse of using Python and Blueprints in Unreal Engine for the purpose of data preparation. The piece is published by Ken Pimentel, a senior product manager at Epic.

Where we need to go

To get to a truly frictionless workflow, we also have to address issues of automating and preparing data for real-time use. We can’t always expect data pushed through Datasmith to be “render perfect” or “optimized,” so we have to provide some means of addressing these kinds of issues in a non-destructive manner (meaning you can easily make changes to the upstream data without repeating work you’ve already done).

[Image: PythonInUnreal_1.jpg]

Traditional workflows to get data ready for the Unreal Engine rely on other tools to optimize the data.

The future revealed

This week at Autodesk University, for the first time we’re showing an early preview of using Python and Blueprints in Unreal Engine for the purpose of data preparation.
 
[Image: PythonInUnreal_2.jpg]

Future workflows to get data ready will use Datasmith and Unreal Engine-based tools.

The demo provides a quick window into the possibilities when users have the option of working in Python and/or Epic’s Blueprint Visual Scripting to take advantage of new, sophisticated data preparation tools that process and optimize scene data. Data prep functions we’re exploring range from “find all parts smaller than 1 cubic centimeter and decimate them” to “find these objects and automatically repair them (fixing bad topology, gaps and overlaps).”
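
To make the rule idea concrete, here is a purely illustrative Python sketch. The SceneMesh type and decimate() helper are stand-ins invented for this example, not part of any real Unreal Engine API shown in the preview; the sketch only shows what a "decimate every part smaller than 1 cubic centimeter" rule might look like as a script.

from dataclasses import dataclass

@dataclass
class SceneMesh:                     # hypothetical stand-in for an imported part
    name: str
    volume_cm3: float                # bounding-box volume in cubic centimeters
    triangle_count: int

def decimate(mesh, ratio):
    """Return a copy of the mesh with its triangle budget reduced."""
    return SceneMesh(mesh.name, mesh.volume_cm3,
                     max(1, int(mesh.triangle_count * ratio)))

def prep_scene(meshes, min_volume_cm3=1.0, ratio=0.25):
    """Rule: decimate every part smaller than min_volume_cm3."""
    return [decimate(m, ratio) if m.volume_cm3 < min_volume_cm3 else m
            for m in meshes]

scene = [SceneMesh("bolt", 0.4, 5000), SceneMesh("fuel_tank", 900.0, 20000)]
for mesh in prep_scene(scene):
    print(mesh.name, mesh.triangle_count)   # bolt is reduced, fuel_tank untouched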

[Image: PythonInUnreal_3.jpg]

In this example, the motorcycle is processed with a series of simple rules that create a more optimized version.

You can find the full post over here.





Read the full article here by 80lvl

Wednesday, 15 November 2017

Temperature Preferences

There's a supposed Mark Twain quote, "The coldest winter I ever spent was a summer in San Francisco." It isn't really by Mark Twain, but I don't know who said it—I just know they've never been to McMurdo Station.

Read the full article here by xkcd.com

OnePlus left a backdoor in its devices with root access

Just a month ago, OnePlus was caught collecting personally identifiable data from phone owners through incredibly detailed analytics. While the company eventually reversed course on the data collection, another discovery has been made in the software of OnePlus phones. One developer found an application intended for factory testing, and through some investigation and reverse-engineering, was able to obtain root access using it. People often tout OnePlus phones as an alternative to the Pixel line now that Google abandoned the Nexus concept of affordable, high-quality phones. Recent events, however, have made it very clear that you should really steer clear of phones like this, unless you know very well what you're doing.

Read the full article here by OSNews

Tuesday, 14 November 2017

There are over a billion outdated Android devices in use

How out of date are Android devices? It's common knowledge that Android devices tend to be more out of date than iOS devices, but what does this actually mean? Let's look at Android marketshare data to see how old the devices in the wild are. The x axis of the plot below is date, and the y axis is Android marketshare. The share of all devices sums to 100% (with some artifacts because the public data Google provides is low precision).

Color indicates age:

  • blue: current (API major version)
  • yellow: 6 months
  • orange: 1 year
  • dark red: 2 years
  • bright red/white: 3 years
  • light grey: 4 years
  • grey: 5 years
  • black: 6 years or more

If we look at the graph, we see a number of reverse-S shaped contours; between each pair of contours, devices get older as we go from left to right. Each contour corresponds to the release of a new Android version and the devices running that version. As time passes, devices on that version get older. When a device is upgraded, it's effectively moved from one contour into a newer one and its color changes to a less outdated color.
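
The age assignment behind those colors is straightforward: on any given date, every device reported on a given API version is treated as being as old as that version's release. Here is a minimal sketch of that mapping; the release dates are roughly correct, but the share numbers are invented for illustration.

from datetime import date

release_date = {                 # approximate API-version release dates
    "API 23": date(2015, 10, 5),
    "API 24": date(2016, 8, 22),
    "API 26": date(2017, 8, 21),
}

share_on = {                     # invented market shares on one snapshot date
    "API 23": 0.45,
    "API 24": 0.40,
    "API 26": 0.15,
}

snapshot = date(2017, 11, 14)

for version, share in share_on.items():
    age_months = (snapshot - release_date[version]).days / 30.44
    print(f"{version}: {share:.0%} of devices, ~{age_months:.0f} months old")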

There are three major ways in which this graph understates the number of outdated devices:

First, we’re using API version data for this and don’t have access to the marketshare of point releases and minor updates, so we assume that all devices on the same API version are up to date until the moment a new API version is released, but many (and perhaps most) devices won’t receive updates within an API version.

Second, this graph shows marketshare, but the number of Android devices has dramatically increased over time. For example, if we look at the 80%-ile most outdated devices (i.e., draw a line 20% up from the bottom), the 80%-ile device today is a few months more outdated than it was in 2014. The huge growth of Android means that there are many more outdated devices now than there were in 2014.

Third, this data comes from scraping Google Play Store marketshare info. That data shows marketshare of devices that have visited the Play Store in the last 7 days. In general, it seems reasonable to believe that devices that visit the Play Store are more up to date than devices that don't, so we should expect an unknown amount of bias in this data that causes the graph to show devices as newer than they actually are.

One thing we can see from the graph is that the world accumulates a larger fraction of old devices over time. This makes sense and we could have figured this out without looking at the data. After all, back at the beginning of 2010, Android phones couldn't be much more than a year old, and now it's possible to have Android devices that are nearly a decade old.

Something that wouldn’t have been obvious without looking at the data is that the uptake of new versions seems to be slowing down – we can see this by looking at the last few contour lines at the top right of the graph, corresponding to the most recent Android releases. These lines have a shallower slope than the contour lines for previous releases. Unfortunately, with this data alone, we can’t tell why the slope is shallower. Some possible reasons might be:

  • Android growth is slowing down
  • Android device turnover (device upgrade rate) is slowing down
  • Fewer devices are receiving updates

Without more data, it’s impossible to tell how much each of these is contributing to the problem. BTW, let me know if you know of a reasonable source for the active number of Android devices going back to 2010! I’d love to produce a companion graph of the total number of outdated devices.

But even with the data we have, we can take a guess at how many outdated devices are in use. In May 2017, Google announced that there are over two billion active Android devices. If we look at the latest stats (the far right edge), we can see that nearly half of these devices are two years out of date. At this point, we should expect that there are more than one billion devices that are two years out of date! Given Android’s update model, we should expect approximately 0% of those devices to ever get updated to a modern version of Android.

Percentiles

Since there’s a lot going on in the graph, we might be able to see something if we look at some subparts of the graph. If we look at a single horizontal line across the graph, that corresponds to the device age at a certain percentile:

Over time, the Nth percentile out of date device is getting more out of date

In this graph, the date is on the x axis and the age in months is on the y axis. Each line corresponds to a different percentile (higher percentile is older), which corresponds to a horizontal slice of the top graph at that percentile.

Each individual line seems to have two large phases (with some other stuff, too). There’s one phase where devices for that percentile get older as quickly as time is passing, followed by a phase where, on average, devices only get slightly older. In the second phase, devices sometimes get younger as new releases push younger versions into a certain percentile, but this doesn’t happen often enough to counteract the general aging of devices. Taken as a whole, this graph indicates that, if current trends continue, we should expect to see proportionally more old Android devices as time goes on, which is exactly what we’d expect from the first, busier, graph.
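
For concreteness, here is a minimal sketch of how one point on such a percentile line could be computed: accumulate share from the newest devices upward until the target percentile is crossed. The age buckets below are invented for illustration, not the scraped Play Store data.

def percentile_age(buckets, pct):
    """buckets: (age_in_months, share) pairs summing to 1.0; pct in [0, 100]."""
    total = 0.0
    for age, share in sorted(buckets):      # newest (smallest age) first
        total += share
        if total * 100 >= pct:
            return age
    return max(age for age, _ in buckets)   # fall back to the oldest bucket

buckets = [(3, 0.15), (15, 0.40), (27, 0.30), (39, 0.15)]   # illustrative only
for pct in (50, 80, 99):
    print(f"{pct}%-ile device age: ~{percentile_age(buckets, pct)} months")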

Dates

Another way to look at the graph is to look at a vertical slice instead of a horizontal slice. In that case, each slice corresponds to looking at the ages of devices at one particular date:

In this plot, the x axis indicates the age percentile and the y axis indicates the raw age in months. Each line is one particular date, with older dates being lighter / yellower and newer dates being darker / greener.

As with the other views of the same data, we can see that Android devices appear to be getting more out of date as time goes on. This graph would be too busy to read if we plotted data for all of the dates that are available, but we can see it as an animation:

iOS

For reference, iOS 11 was released two months ago and it now has just under 50% iOS marketshare despite November’s numbers coming before the release of the iPhone X (this is compared to < 1% marketshare for the latest Android version, which was released in August). It’s overwhelmingly likely that, by the start of next year, iOS 11 will have more than 50% marketshare and there’s an outside chance that it will have 75% marketshare, i.e., it’s likely that the corresponding plot for iOS would have the 50%-ile (red) line in the second plot at age = 0 and it’s not implausible that the 75%-ile (orange) line would sometimes dip down to 0. As is the case with Android, there are some older devices that stubbornly refuse to update; iOS 9.3, released a bit over two years ago, sits at just a bit above 5% marketshare. This means that, in the iOS version of the plot, it’s plausible that we’d see the corresponding 99%-ile (green) line in the second plot at a bit over two years (half of what we see for the Android plot).

Conclusion

We’ve seen that Android devices appear to be getting more out of date over time. This makes it difficult for developers to target “new” Android API features, where new means anything introduced in the past few years. It also means that there are a lot of Android devices out there that are behind in terms of security. This is true both in absolute terms and also relative to iOS.

Until recently, Android was directly tied to the hardware it ran on, making it very painful to keep old devices up to date because that required a custom Android build with phone-specific (or at least SoC-specific) work. Google claims that this problem is fixed in the latest Android version (8.0, Oreo). People who remember Google's "Android update alliance" announcement in 2011 may be a bit skeptical of the more recent announcement. In 2011, Google and U.S. carriers announced that they'd keep devices up to date for 18 months, which mostly didn't happen. However, even if the current announcement isn't smoke and mirrors and the latest version of Android solves the update problem, we've seen that it takes years for Android releases to get adopted and we've also seen that the last few Android releases have significantly slower uptake than previous releases. Additionally, even though this is supposed to make updates easier, it looks like Android is still likely to stay behind iOS in terms of updates for a while. Google has promised that its latest phone (Pixel 2, 10/2017) will get updates for three years. That seems like a step in the right direction, but as we've seen from the graphs above, extending support by a year isn't nearly enough to keep most Android devices up to date. By comparison, if you have an iPhone, the latest version of iOS (released 9/2017) works on devices back to the iPhone 5S (released 9/2013).

If we look at the newest Android release (8.0, 8/2017), it looks like you’re quite lucky if you have a two year old device that will get the latest update. The oldest “Google” phone supported is the Nexus 6P (9/2015), giving it just under two years of support.

If you look back at devices that were released around when the iPhone 5S was, the situation looks even worse. Back then, I got a free Moto X for working at Google; the Moto X was about as close to an official Google phone as you could get at the time (this was back when Google owned Moto). The Moto X was released on 8/2013 (a month before the iPhone 5S) and the latest version of Android it supports is 5.1, which was released on 2/2015, a little more than a year and a half later. For an Android phone of its era, the Moto X was supported for an unusually long time. It's actually a good sign that things look worse as we look further back in time, but at the rate things are improving, it will be years before there's a decently supported Android device released and then years beyond those years before that Android version is in widespread use. It's possible that Fuchsia will fix this, but Fuchsia is also many years away from widespread use.

In a future post, we'll look at Android response latency, which is also quite interesting: it's much more variable between phones than iOS response latency is between different models of iPhone.

The main thing I’m missing from my analysis of phone latency is older phones. If you have an old phone I haven’t tested and want to donate it for testing, you can mail it to:

Dan Luu
Recurse Center
455 Broadway, 2nd Floor
New York, NY 10013

Thanks to Leah Hanson, Kate Murphy, Marek Majkowski, @zofrex, @Aissn, Chris Palmer, and an anonymous person for comments/corrections/related discussion.

Also, thanks to Victorien Villard for making the data these graphs were based on available!



Read the full article here by Dan Luu

Mercedes' ubiquitous city bus is going electric in 2018

Mercedes-Benz wants an electric version of its long-running city bus to power the emissions-free mass transit future. Over a dozen cities recently pledged to make their bus fleets all-electric by 2030 (among them LA and Seattle). And, if the German auto-maker gets its way, its EV will be the one ferrying people around all hours of the day. Mercedes announced that it plans to start producing the electric "Citaro" by the end of 2018.

In the meantime, it's begun testing prototypes in "extreme" conditions, like the icy roads of the Arctic circle and on the sweltering tarmac in Spain's Sierra Nevada. The move comes just days after its parent company Daimler unveiled its electric school bus, further emphasizing its commitment to electric public transportation.

The "ultra quiet" bus is powered by modular, lithium-ion battery packs that fuel the electric wheel hub motors at the rear axle. The company also touts its energy-efficient climate control system as a range booster. But, it will have its work cut out to beat EV manufacturer Proterra -- which recently set the record for longest distance traveled by an electric bus on a single charge (a whopping 1,101.2 miles). It's also facing competition from Navya, the French firm behind the Las Vegas driverless shuttle service that just started back up (albeit with a hiccup). Not to mention household names like Volvo and Toyota.

Source: Daimler



Read the full article here by Engadget

Germany's future trains have digital cars with game consoles

The impending era of driverless rides could prove disruptive for traditional modes of public transport. But, Germany's state-owned railway company thinks it has a solution, and it sounds like a lot of fun -- especially for gamers. It just unveiled plans for a new train complete with a digital coach that packs TVs and game consoles. Deutsche Bahn's "Ideenzug" ("Idea Train") will also boast gym equipment and a play area for kids. The railway firm envisions the project as a way to keep pace with, and even overtake, self-driving cars. (If it thinks autonomous vehicles are a scary prospect, wait till it learns of the Hyperloop).

For people turned off by backward-facing chairs, the train will boast swivelling seats (some of which will also come with bulges on either side for added privacy). Deutsche Bahn is currently showing off a replica of the project. But, making it a reality could prove a tricky prospect, with some claiming that it may not pass security regulations, as reported by German newspaper Süddeutsche Zeitung. The sticking point seems to be the gym equipment, so gaming will likely still be on the cards.

Railway travel is changing elsewhere too, but these modifications have hinged on green energy and faster journeys. India, for example, recently debuted a solar-powered train, while Japan is known for its high-speed bullet trains. But, Deutsche Bahn's project seems retrofitted to accommodate the modern-day traveller. Someone who wants to get in a workout, catch up on work, and indulge in some multiplayer action en route to their destination. All that's missing is Netflix.

Source: Deutsche Bahn



Read the full article here by Engadget

Quantum-ized Firefox 57 Ready For Download

Firefox 57.0 is being officially released this week and its stable download is now available...

Read the full article here by Phoronix

Qualcomm rejects Broadcom’s buyout bid

Last week, wireless chip manufacturer Broadcom made a $130 billion bid to buy out rival Qualcomm. The unsolicited proposal was expected to be rejected by Qualcomm, and as of this morning, it has been. The company's board of directors unanimously voted against Broadcom's bid saying that it undervalued Qualcomm.

"It is the board's unanimous belief that Broadcom's proposal significantly undervalues Qualcomm relative to the company's leadership position in mobile technology and our future growth prospects," Paul Jacobs, Qualcomm's chairman of the board, said in a statement. Qualcomm CEO Steve Mollenkopf added that the company's technology -- its 5G tech in particular -- puts it in a position to generate additional value for stockholders beyond what Broadcom's bid suggests, and Qualcomm's director, Tom Horton said, "We are highly confident that the strategy Steve and his team are executing on provides far superior value to Qualcomm shareholders than the proposed offer."

This is unlikely to be the end of the attempted takeover, however. Sources told Reuters that prior to Qualcomm's official rejection, Broadcom was already considering a larger bid. It was also exploring the possibility of putting the decision in the shareholders' hands by nominating individuals to Qualcomm's board who would engage in negotiations. If shareholders are more open to the buyout than the current board is, they would then have the option of voting in new members who are more supportive of the deal.

Source: Qualcomm



Read the full article here by Engadget

Saturday, 11 November 2017

FreeCS: Aiming For An Open-Source Counter-Strike Implementation

The latest open-source project working on an engine re-implementation of a popular game is FreeCS, which aims to be a free software re-implementation of Counter-Strike.

Before getting too excited, FreeCS isn't targeting Counter-Strike: Global Offensive, Counter-Strike: Source, nor Counter-Strike 1.6, but rather Counter-Strike 1.5. Nevertheless, plenty of nostalgic Linux gamers will probably be interested.

FreeCS is re-implementing the original Counter-Strike using the FTE QuakeWorld engine. As with other open-source re-implementation projects built around commercial games, none of the game assets are bundled, so you need to already have a copy of either Half-Life or Counter-Strike. Additionally, FreeCS makes no use of the Half-Life SDK.

Those wanting to check out this project can find it on GitHub.



Read the full article here by Phoronix

Google Working To Remove MINIX-Based ME From Intel Platforms

An anonymous reader quotes a report from Tom's Hardware: Intel's Management Engine (ME) technology is built into almost all modern Intel CPUs. At the Embedded Linux Conference, a Google engineer named Ronald Minnich revealed that the ME is actually running its own entire MINIX OS and that Google is working on removing it. Due to MINIX's presence on every Intel system, the barebones Unix-like OS is the most widely deployed operating system in the world. Intel's ME technology is a hardware-level system within Intel CPUs that consists of closed-source firmware running on a dedicated microprocessor. There isn't much public knowledge of the workings of the ME, especially in its current state. It's not even clear where the hardware is physically located anymore. What's concerning Google is the complexity of the ME. Public interest in the subject was piqued earlier this year when a vulnerability was discovered in Intel's Active Management Technology (AMT), but that's just software that runs on the ME--the ME is actually an entire OS. Minnich's presentation touched on his team's discovery that the OS in question is a closed version of the open-source MINIX OS. The real focus, though, is what's in it and the consequences. According to Minnich, that list includes web server capabilities, a file system, drivers for disk and USB access, and, possibly, some hardware DRM-related capabilities. It's not known if all this code is explicitly included for current or future ME capabilities, or if it's because Intel simply saw more potential value in keeping rather than removing it.

Read more of this story at Slashdot.



Read the full article here by Slashdot

(PR) Qualcomm Starts Shipping 48-Core Centriq 2400 Processors


At a press conference held today in San Jose, Calif., Qualcomm Datacenter Technologies, Inc., a subsidiary of Qualcomm Incorporated (NASDAQ: QCOM), officially announced commercial shipment of the world's first and only 10 nanometer server processor series: the Qualcomm Centriq 2400 processor family. The Qualcomm Centriq 2400 processor family is the first high-performance Arm-based processor series designed to offer groundbreaking throughput performance for cloud workloads running in today's datacenters. Purpose built for cloud, the Qualcomm Centriq 2400 server processor family delivers exceptional performance-per-watt and performance-per-dollar.

"Today's announcement is an important achievement and the culmination of more than four years of intense design, development and ecosystem enablement effort," said Anand Chandrasekher, senior vice president and general manager, Qualcomm Datacenter Technologies, Inc. "We have designed the most advanced Arm-based server processor in the world that delivers high performance coupled with the highest energy efficiency, enabling our customers to realize significant cost savings."

Qualcomm Centriq 2400 Processor Family Technical Specs

The Qualcomm Centriq 2400 processor family is a single chip platform-level solution built using Samsung's 10 nanometer FinFET process with 18 billion transistors on only 398 mm². It contains up to 48 high-performance, 64-bit, single-thread cores, running at up to 2.6 GHz frequency. The cores are connected with a bi-directional segmented ring bus with 250GB/s of aggregate bandwidth to avoid performance bottlenecks under full load. To maximize performance under various use cases, the design has 512KB of shared L2 cache for every two cores, and 60 MB of unified L3 cache distributed on the die. It has 6 channels of DDR4 memory and can support up to 768 GB of total DRAM capacity with 32 PCIe Gen3 lanes and 6 PCIe controllers. The Qualcomm Centriq 2400 processor family also supports Arm's TrustZone secure operating environment, and supports hypervisors for virtualization. The Qualcomm Centriq 2400 is able to achieve exceptional performance, while consuming less than 120 watts.
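
A quick back-of-the-envelope pass over those figures (my arithmetic from the numbers quoted above, not additional claims from the press release):

cores = 48
l2_kb_per_core_pair = 512      # 512 KB of shared L2 for every two cores
l3_mb = 60
ddr4_channels = 6
max_dram_gb = 768

total_l2_mb = (cores // 2) * l2_kb_per_core_pair / 1024
cache_per_core_mb = (total_l2_mb + l3_mb) / cores
dram_per_channel_gb = max_dram_gb / ddr4_channels

print(f"Total shared L2:      {total_l2_mb:.0f} MB")        # 12 MB
print(f"L2+L3 per core:       {cache_per_core_mb:.2f} MB")  # 1.50 MB
print(f"Max DRAM per channel: {dram_per_channel_gb:.0f} GB") # 128 GB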

With a list price of $1,995, the 48-core Qualcomm Centriq 2460 processor offers greater than 4X better performance per dollar and up to 45% better performance per watt versus Intel's highest-performance Skylake processor, the Intel Xeon Platinum 8180.

A number of cloud service providers and technology companies participated in the launch event and demonstrated applications of an ARM-based datacenter running on the Qualcomm Centriq 2400 series. These included Alibaba, LinkedIn, Cloudflare, American Megatrends Inc., Arm, Cadence Design Systems, Canonical, Chelsio Communications, Excelero, Hewlett Packard Enterprise, Illumina, MariaDB, Mellanox, Microsoft Azure, MongoDB, Netronome, Packet, Red Hat, ScyllaDB, 6WIND, Samsung, Solarflare, Smartcore, SUSE, Synopsys, Uber, and Xilinx.

Optimized for common cloud workloads, the Qualcomm Centriq 2400 processor family has been designed to deliver throughput performance for highly threaded cloud native applications that are developed as micro-services and deployed for scale-out. Some of the key applications and use cases demonstrated today include:

  • Web front end with HipHop Virtual Machine
  • NoSQL databases including MongoDB, Varnish, Scylladb
  • Cloud orchestration and automation including Kubernetes, Docker, metal-as-a-service
  • Data analytics including Apache Spark
  • Deep learning inference
  • Network function virtualization
  • Video and image processing acceleration
  • Multi-core electronic design automation
  • High throughput compute bioinformatics
  • Neural class networks
  • OpenStack Platform
  • Scaleout Server SAN with NVMe
  • Server-based network offload


Read the full article here by techPowerUp!

Wednesday, 8 November 2017

2x Called Off: Bitcoin Hard Fork Suspended for Lack of Consensus

Alyssa Hertig, writing for CoinDesk: The organizers of a controversial bitcoin scaling proposal are suspending an attempt to increase the block size by way of a software upgrade. Known for its strong early support from bitcoin startups and mining pools, the plan, called Segwit2x, or simply 2x, was to trigger a block size increase at block 494784, expected to occur on or around November 16th. The suspension was announced today in an email, written by Mike Belshe, CEO and co-founder of bitcoin wallet software provider BitGo. One of the leaders of the Segwit2x project, he argued that the scaling proposal is too controversial to move forward. He wrote: "Unfortunately, it is clear that we have not built sufficient consensus for a clean block size upgrade at this time. Continuing on the current path could divide the community and be a setback to Bitcoin's growth. This was never the goal of Segwit2x."

Read more of this story at Slashdot.



Read the full article here by Slashdot

BitTorrent Inventor Announces Eco-Friendly Bitcoin Competitor Chia

A Bitcoin transaction wastes as much electricity as it takes to power an American home for a week, and legendary coder Bram Cohen wants to fix that. He plans to use proof of time and storage rather than electricity-burning proofs of work. I'm as green as the next guy (maybe not), but if crypto-currency is really going to replace fiat currency then I see no reason why a more energy-friendly solution isn't a good thing. Chia instead relies on proofs of space in file storage, which people often already have and can use at no additional cost. It combines this with proofs of time that disarm a wide array of attacks to which proofs of space are susceptible. Discussion
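
For a rough sense of the mechanism, here is a toy Python sketch of the general proof-of-space idea: precompute and store a large table of hashes, then answer challenges by lookup. This is emphatically not Chia's actual construction; the names and parameters are invented for illustration. Keeping the table on disk makes answering cheap, while a prover without the storage would have to redo all the hashing for every challenge, which is the resource the scheme meters.

import hashlib
import os

def plot(seed, entries):
    """'Plotting': precompute nonce -> hash-as-integer (stand-in for a plot file on disk)."""
    return {
        nonce: int.from_bytes(
            hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest(), "big")
        for nonce in range(entries)
    }

def best_proof(table, challenge):
    """Answer a challenge with the stored hash closest to it; closeness is the proof quality."""
    target = int.from_bytes(challenge, "big")
    nonce = min(table, key=lambda n: abs(table[n] - target))
    return nonce, abs(table[nonce] - target)

table = plot(b"farmer-key", entries=100_000)   # more stored entries => closer (better) proofs
nonce, distance = best_proof(table, os.urandom(32))
print(f"best nonce: {nonce}, distance from challenge: {distance:x}")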

Read the full article here by [H]ardOCP News/Article Feed

In Amazon’s game engine, voice actors can now be replaced with robots

Hawking: AI Could Be "Worst Event in the History of Our Civilization"

Scientist Stephen Hawking warned AI could serve as the "worst event in the history of our civilization" unless humanity is prepared for its possible risks. While AI could potentially undo damage done to the Earth and cure diseases, he notes that it may also spur the creation of powerful autonomous weapons of terror that could be used as a tool "by the few to oppress the many." "Success in creating effective AI could be the biggest event in the history of our civilization, or the worst," he said. Hawking called for more research in AI on how to best use the technology, as well as implored scientists to think about AI's impact. "Perhaps we should all stop for a moment and focus our thinking on not only making AI more capable and successful, but maximizing its societal benefit," he said. Discussion

Read the full article here by [H]ardOCP News/Article Feed

Tuesday, 7 November 2017

Intel Clear Linux on Udoo X86 How-To

The Intel Clear Linux Project is a Linux distribution built for cloud and IoT use cases.


The aim is to showcase the best of Intel architecture technology and performance, from low-level kernel features to complex applications that span across the entire OS stack. The focus is on Power and Performance optimizations throughout the operating system as a whole.

The distribution is quite recent, and Intel is also focusing on Docker containers.




The Phoronix website has already highlighted the improvements it can bring over other common distributions (see also here).

How do we start? The Intel website offers a short tutorial you can find on this page.

The first step is to verify that the Udoo X86 is equipped with a compatible processor.
You can check this with the clear-linux-check-config.sh script.

You'll get this output:

luca@udoox86 ~ $ ./clear-linux-check-config.sh host
Checking if host is capable of running Clear Linux* OS for Intel® Architecture

SUCCESS: Intel CPU
SUCCESS: 64-bit CPU (lm)
SUCCESS: Streaming SIMD Extensions v4.1 (sse4_1)
SUCCESS: EFI firmware

Follow the tutorial to prepare the USB drive. Nothing uncommon compared to other distros.

But...
You'll get a surprise when rebooting and selecting the USB drive as primary boot. You will be able to proceed and get to the partitioning setup page... A wild Python backtrace will appear!
The traceback says that running "fdisk -l" on /dev/mmcblk0rpmb returned an error.

The device is a Replay Protected Memory Block. I am not an expert and I could be totally wrong, but it seems to be a way to provide a separate trusted store where an OEM can keep IP it wants to protect. Access is enforced by a secure API. This document about TEE OS should clarify some details.

We're lucky the installer is written in Python.
Reboot and plug the USB drive into your favorite device running Linux.

Locate the file /usr/bin/ister_gui.py and open it for editing.
We need to modify the function get_list_of_disks().
The function runs lsblk and builds a list of all devices that are not the current root device.

To exclude the unwanted devices, add the 'boot'/'rpm' check shown below; the rest of the loop is unchanged:

    for part in parts:
        part = part.strip()
        # New check: skip boot partitions and the RPMB device
        # (e.g. /dev/mmcblk0rpmb), which made the installer's fdisk call fail.
        if 'boot' in part or 'rpm' in part:
            continue

        if 'disk' in part:
            disks.append(part.split()[0])        # keep real disks
        elif part.endswith('/'):
            root_disk = part.split()[0]          # remember the current root device


There are more elegant ways to do so, but it just works™.
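
To see what the added check does, here is a quick standalone run of the same filter over a few lsblk-style lines (the device names are examples, not output captured from the Udoo X86):

parts = [
    "mmcblk0      disk",
    "mmcblk0boot0 disk",
    "mmcblk0rpmb  disk",
    "sda          disk",
    "sda1         part /",
]

disks, root_disk = [], None
for part in parts:
    part = part.strip()
    if 'boot' in part or 'rpm' in part:     # the added check
        continue
    if 'disk' in part:
        disks.append(part.split()[0])
    elif part.endswith('/'):
        root_disk = part.split()[0]

print(disks)       # ['mmcblk0', 'sda']: the rpmb and boot devices are skipped
print(root_disk)   # sda1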

Once done, you can unmount the USB drive and reboot. If everything is fine, you'll get past the partition setup page and be able to properly finalize the installation.
It's a network installation, so be patient (even on a Gigabit fiber connection, if you're writing to the eMMC).

In case you do not want to install on the eMMC, it should be possible to disable the device through the UEFI, in which case this "fix" should not be necessary. I'll update once I can confirm that.

After rebooting and selecting the internal eMMC as the boot device, you should be able to boot into the Clear Linux distro for the first time!

You'll notice an error at kernel boot, still related to the RPMB device. It should be something like:

systemd-gpt-auto-generator[1765]: Failed to dissect: Input/output error

As the distribution is quite recent, it is still affected by the problem described in this issue on systemd. A fix has been integrated in master, but probably not yet released. The workaround is to pass the following kernel boot parameter:

systemd.gpt_auto=0

Clear Linux uses clr-boot-manager, a kernel and boot loader manager.
It has also been integrated in Solus OS.
It's actually thanks to the Solus OS webpage that I understood how to add the flag regardless of the kernel in use.

Become root, create a new file /etc/kernel/cmdline, and add the systemd boot parameter above. Save the file, run clr-boot-manager update, and reboot. The message at boot should disappear!



Remember: if you choose to install Linux on the eMMC, disable swap, as the device will probably suffer from the intense usage.

Feel free to comment and suggest corrections!

Have a nice day!

MINIX: Intel's Hidden In-chip Operating System

Steven J. Vaughan-Nichols, writing for ZDNet: Matthew Garrett, the well-known Linux and security developer who works for Google, explained recently that, "Intel chipsets for some years have included a Management Engine [ME], a small microprocessor that runs independently of the main CPU and operating system. Various pieces of software run on the ME, ranging from code to handle media DRM to an implementation of a TPM. AMT [Active Management Technology] is another piece of software running on the ME." [...] At a presentation at Embedded Linux Conference Europe, Ronald Minnich, a Google software engineer, reported that systems using Intel chips that have AMT are running MINIX. So, what's it doing in Intel chips? A lot. These processors are running a closed-source variation of the open-source MINIX 3. We don't know exactly what version or how it's been modified since we don't have the source code. In addition, thanks to Minnich and his fellow researchers' work, MINIX is running on three separate x86 cores on modern chips. There, it's running: TCP/IP networking stacks (4 and 6), file systems, drivers (disk, net, USB, mouse), web servers. MINIX also has access to your passwords. It can also reimage your computer's firmware even if it's powered off. Let me repeat that. If your computer is "off" but still plugged in, MINIX can still potentially change your computer's fundamental settings. And, for even more fun, it "can implement self-modifying code that can persist across power cycles." So, if an exploit happens here, even if you unplug your server in one last desperate attempt to save it, the attack will still be there waiting for you when you plug it back in. How? MINIX can do all this because it runs at a fundamentally lower level. [...] According to Minnich, "there are big giant holes that people can drive exploits through." He continued, "Are you scared yet? If you're not scared yet, maybe I didn't explain it very well, because I sure am scared." Also read: the open letter to Intel from Andrew S. Tanenbaum, a professor of Computer Science at Vrije Universiteit.

Read more of this story at Slashdot.



Read the full article here by Slashdot