Read more of this story at Slashdot.
Intel has released two ready-to-use RealSense depth cameras, the D415 and the D435, that can add 3D capabilities to any device or machine. They both come in a USB-powered form factor and are capable of processing depth in real time, thanks to the chipmaker's new RealSense vision processor D4. The models work indoors and outdoors in any lighting environment, so they can be used for almost any machine that needs a depth camera. Those include drones meant to soar through the skies and robots with AR/VR features.
Intel says the cameras' target audiences aren't just developers and manufacturers, but also makers and educators, since the cameras are easy to use and work as soon as you plug them in. They also ship with Intel's RealSense SDK 2.0, which is now a cross-platform, open-source SDK.
Intel RealSense VP Sagi Ben Moshe said in a statement:
"Many of today's machines and devices use 2D image recognition-based computer vision, but with Intel RealSense's best-in-class depth technology, we are redefining future technologies to 'see' like a human, so devices and machines can truly enrich people's lives. With its compact, ready-to-use form, the Intel RealSense D400 Depth Camera series not only makes it easy for developers to build 3D depth sensing into any design, but they are also ready to be embedded into high-volume products."
The D415 and the D435 are now available for pre-order for $149 and $145, respectively. The D415 has a narrow field of view and a rolling shutter that scans its environment from one side to the other to take an image; it works best with small objects and anything that needs precise measurements. The D435, on the other hand, has a wider field of view and a global shutter that captures the whole image at once, making it ideal for capturing the depth of objects in motion and for covering large areas, since it minimizes blind spots.
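Both models are stereo depth cameras: two imagers view the scene from slightly different positions, and depth falls out of the per-pixel disparity between the two views. A minimal sketch of that relationship follows; the focal length and baseline below are illustrative values, not the D415/D435 calibration parameters.

```python
# Stereo depth from disparity: depth = focal_length * baseline / disparity.
# The numbers used here are illustrative, not Intel's calibration values.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Return depth in metres for a pixel disparity between the two imagers."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature that shifts 20 px between the left and right imagers,
# with a 600 px focal length and a 50 mm baseline:
print(depth_from_disparity(600.0, 0.050, 20.0))  # ≈ 1.5 m
```

Nearer objects produce larger disparities, which is why a narrow field of view (D415) favors precision on small objects while a wide field of view (D435) favors coverage.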
Source: Intel
Google engineers have transitioned from Ubuntu to Debian for their internal machines.
The company, which has been using Goobuntu, a customised version of Ubuntu, for years, announced last year that it would be switching to gLinux, based on Debian Testing.
MuyLinux reports (in Spanish) that DebConf17, held last August, revealed plans for the move, along with a roadmap for the project and for a smooth transition.
That process has now begun, as the company moves away from a distro to which it had made no contributions. Google used Ubuntu's Long Term Support (LTS) builds as a customer of the Ubuntu Advantage Program but was not active in the community.
In the case of Debian, Google will send changes upstream as an active contributor, whilst tracking the Testing branch so it can take advantage of new builds sooner.
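For reference, tracking Debian Testing on a stock Debian system is typically just a matter of pointing apt at the testing suite; gLinux itself is an internal build, so Google's actual configuration will differ.

```
# /etc/apt/sources.list — track Debian Testing (plain Debian, not gLinux)
deb http://deb.debian.org/debian testing main contrib
deb-src http://deb.debian.org/debian testing main contrib
```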
This might mean that there are more problems for users than the safe pair of hands of the commercial Ubuntu, but it also means that Google will have complete autonomy over how to fix them, and benefit the community at the same time.
For Canonical, it’s a big loss. Though Ubuntu has a huge number of clients in servers and the cloud, and remains the biggest distro in the limited PC market, the loss of a paying customer the size of Google won’t go unnoticed.
Source: http://ift.tt/2Dd4pIi
Submitted by: Arnfried Walbrecht
Sometimes when you walk into a hackerspace you will see somebody’s project on the table that stands so far above the norm of a run-of-the-mill open night on a damp winter’s evening that you have to know more. If you are a Hackaday scribe you have to know more, and you ask the person behind it whether they have something online to share with the readership.
[Jolar] was working on his 3D scanner project on just such an evening in Oxford Hackspace. It’s a neatly self-contained unit in the form of a triangular frame made of aluminium extrusions, into which are placed a stack of Raspberry Pi Zeros with attached cameras, and a very small projector which needed an extra lens from a pair of reading glasses to help it project so closely.
The cameras are arranged to have differing views of the object to be scanned, and the projector casts an array of randomly created dots onto it to aid triangulation from the images. At the press of a button, the four images are taken, uploaded (in this case to a cloud drive), and then picked up by his laptop for processing.
A Multi-view Stereo (MVS) algorithm does the processing work, and creates a 3D model. Doing the processing is VisualSFM, and the resulting files can then be viewed in MeshLab or imported into a CAD package. Seeing it in action the whole process is quick and seamless, and could easily be something you’d see on a commercial product. There is more to come from this project, so it is definitely one to watch.
Four Pi boards may seem a lot, but it is nothing to this scanner with 39 of them.
While Intel is at the center of the Spectre/Meltdown fiasco, AMD's chips are also affected by the CPU vulnerabilities. The company previously said that the risk of exploit using variant 2 was near zero due to its chips' architecture. But in its latest announcement, it said that because both variants are still "applicable to AMD processors," it also plans to release patches for the second variant to be absolutely safe. AMD already provided PC manufacturers its fix for the first Spectre version, and Microsoft has begun rolling it out. The chipmaker also said it's working with Redmond to address a problem that delayed the distribution of patches for its older processors.
Since the second version of Spectre needs a different fix, AMD will start providing its Ryzen and EPYC customers and partners with a patch for those chips this week. Firmware updates for its older chips will follow in the coming weeks. If you use Linux, you might get it sooner than you think, since Linux vendors have already started releasing OS patches for the second variant. You might have to wait a bit if you're a Windows user, though, since AMD is still working out distribution timing with Microsoft.
Despite deciding to release a patch for version 2, the company reiterated that its chips' architecture will make it very difficult for attackers to use the exploit. It also maintained that Meltdown isn't applicable to AMD chips at all. AMD's processors aren't "susceptible" to Meltdown, the chipmaker wrote, "due to [the company's] use of privilege level protections within paging architecture." Since "no mitigation is required" for variant 3, it won't be creating a patch for the vulnerability.
Update: AMD clarified that it never said its chips were not susceptible to variant 2.
Via: Reuters
Source: AMD
While the whole industry is scrambling on Spectre, Meltdown has focused most of the spotlight on Intel, and there is no shortage of outrage in Internet comments. Like many great discoveries, this one is obvious with the power of hindsight. So much so that the spectrum of reactions has spanned an extreme range: from “It’s so obvious, Intel engineers must be idiots” to “It’s so obvious, Intel engineers must have known! They kept it from us in a conspiracy with the NSA!”
We won’t try to sway those who choose to believe in a conspiracy that’s simultaneously secret and obvious to everyone. However, as evidence of non-obviousness, some very smart people got remarkably close to the Meltdown effect last summer, without getting it all the way. [Trammel Hudson] did some digging and found a paper from the early 1990s (PDF) that warns of the dangers of fetching info into the cache that might cross privilege boundaries, but the technique wasn’t weaponized until recently. In short, these are old vulnerabilities, but exploiting them was hard enough that it took twenty years to do it.
Building a new CPU is the work of a large team over several years. But they weren’t all working on the same thing for all that time. Any single feature would have been the work of a small team of engineers over a period of months. During development they fixed many problems we’ll never see. But at the end of the day, they are only human. They can be 99.9% perfect and that won’t be good enough, because once hardware is released into the world: it is open season on that 0.1% the team missed.
The odds are stacked in the attacker’s favor. The team on defense has a handful of people working a few months to protect against all known and yet-to-be discovered attacks. It is a tough match against the attackers coming afterwards: there are a lot more of them, they’re continually refining the state of the art, they have twenty years to work on a problem if they need to, and they only need to find a single flaw to win. In that light, exploits like Spectre and Meltdown will probably always be with us.
Let’s look at some factors that paved the way to Intel’s current embarrassing situation.
In Intel’s x86 lineage of processors, the Pentium Pro in 1995 was the first to perform speculative execution. It was the high-end offering for demanding roles like multi-user servers, so it had to keep low-privilege users’ applications from running wild. But the design only accounted for direct methods of access. The general concept of side-channel attacks was well established in the analog world by that time, but it hadn’t yet been proven applicable to the digital world. For instance, one of the groundbreaking papers in side-channel attacks, pulling encryption keys out of certain cryptography algorithm implementations, was not published until a year after the Pentium Pro came to market.
Computer security was a very different and a far smaller field in the 1990s. For one, Internet Explorer 6, the subject of many hard lessons in security, was not released until 2001. The growth of our global interconnected network would expand opportunities and fuel a tremendous growth in security research on offense and defense, but that was still years away. And in the early 1990s, software security was in such a horrible state that only a few researchers were looking into hardware.
As the years passed and more people looked harder at more things, Intel’s never-ending quest for speed inadvertently made the vulnerability easier to exploit. Historically CPU performance advancements have outpaced those for memory, and their growing disparity was a drag on overall system performance. CPU memory caches were designed to help climb this “memory wall”, a term coined in a 1994 ACM paper. One Pentium Pro performance boost came from moving its L2 cache from the motherboard to its chip package. Later processors added a third level of cache, and eventually Intel integrated everything into a single piece of silicon. Each of these advances made cache access faster, but that also increased the time difference between reading cached and uncached data. On modern processors, this difference stands out clearly against the background noise, as illustrated in this paper on Meltdown.
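To make the timing-difference idea concrete, here is a toy software simulation of a Flush+Reload-style probe: a mock "cache" remembers which lines were touched, a victim makes one data-dependent access, and the attacker recovers the value purely by observing which probe comes back fast. This is a deliberately simplified model with made-up latencies, not a working exploit.

```python
# Toy simulation of a cache-timing side channel (not a real exploit).
# A cached line returns a "fast" latency; an uncached line a "slow" one.

HIT_NS, MISS_NS = 40, 300  # illustrative latencies, not real measurements

class SimCache:
    def __init__(self):
        self.lines = set()

    def flush(self):
        self.lines.clear()

    def access(self, line):
        """Return the simulated latency, then cache the line."""
        latency = HIT_NS if line in self.lines else MISS_NS
        self.lines.add(line)
        return latency

def victim(cache, secret):
    # The data-dependent access leaves a footprint in the cache.
    cache.access(secret)

def attacker(cache):
    # Probe all 256 possible lines; the one the victim touched is fast.
    timings = {line: cache.access(line) for line in range(256)}
    return min(timings, key=timings.get)

cache = SimCache()
cache.flush()
victim(cache, secret=42)
print(attacker(cache))  # 42
```

Real attacks do exactly this, except the "latency" is a genuine hardware timing measurement, which is why the widening gap between cached and uncached reads made the signal ever easier to pick out.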
Flash-forward to today: timing attacks against cache memory have become very popular. Last year all the stars aligned as multiple teams independently examined how to employ the techniques against speculative execution. The acknowledgements credited Jann Horn of Google Project Zero as the first to notify Intel of Meltdown in June 2017, triggering investigation into how to handle a problem whose seeds were planted over twenty years ago.
This episode will be remembered as a milestone in computer security. It is a painful lesson with repercussions that will continue reverberating for some time. We have every right to hold industry-dominant Intel to high standards and put them under the spotlight, and we expect mitigations and fixes. The fundamental mismatch of fast processors that use slow memory will persist, so CPU design will evolve in response to these findings, and the state of the art will move forward, both in how to find problems and how to respond to them, because there are certainly more flaws awaiting discovery.
So we can’t stop you if you want to keep calling Intel engineers idiots. But we think that the moral of this story is that there will always be exploits like these because attack is much easier than defense. The Intel engineers probably made what they thought was a reasonable security-versus-speed tradeoff back in the day. And given the state of play in 1995 and the fact that it took twenty years and some very clever hacking to weaponize this design flaw, we’d say they were probably right. Of course, now that the cat is out of the bag, it’s going to take even more cleverness to fix it up.
In the wake of Meltdown and Spectre, Intel yesterday released new microcode binaries for Linux systems.
There isn't any official changelog, so it's difficult to say exactly what has changed, but given the timing, presumably some security fixes around Spectre/Meltdown. I'll be doing some before/after benchmarks on some Intel Linux systems to see if the updated microcode causes any noticeable impact.
The updated Intel x86 CPU microcode binaries can be fetched from downloadcenter.intel.com.
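After loading new microcode, one way to confirm which revision is active on Linux is the `microcode` field in `/proc/cpuinfo`. A small sketch that parses that field (the field name is as found on Linux x86; the sample text is made up for illustration):

```python
# Extract the loaded microcode revision from /proc/cpuinfo text (Linux x86).
import re

def microcode_revision(cpuinfo_text):
    """Return the first 'microcode' field value, or None if absent."""
    m = re.search(r"^microcode\s*:\s*(\S+)", cpuinfo_text, re.MULTILINE)
    return m.group(1) if m else None

sample = "processor\t: 0\nmicrocode\t: 0xc2\ncpu MHz\t: 3400.000\n"
print(microcode_revision(sample))  # 0xc2

# On a real system:
# with open("/proc/cpuinfo") as f:
#     print(microcode_revision(f.read()))
```

Comparing this value before and after installing the update is a quick sanity check alongside any benchmarking.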
A little more than a day since Google Project Zero went public with its findings regarding a major security flaw in Intel (and others') chip designs, the company announced that it is already pushing out patches to eliminate the vulnerability. Intel has "already issued updates for the majority of processor products introduced within the past five years," per the company press release, and expects to have 90 percent of affected chips produced within the past five years patched by the end of the week.
The flaw, which afflicts chips made over the past decade, enables ordinary processes to determine the layout of protected kernel memory. This "software analysis method", as Intel describes the flaw, allows a pair of exploits, dubbed "Meltdown" and "Spectre," to swipe data from other apps on vulnerable devices -- be they PCs, servers or mobile phones -- running Intel, ARM or AMD chips.
The solution cooked up by Intel and its partners so far entails severing the link between the kernel and these processes, though that could have a dramatic impact on a patched chip's operating speed. The company asserts that the impacts will be "highly workload-dependent" and not particularly noticeable to the average consumer.
Update: Microsoft says it will release an update for Surface devices to protect them against the chip vulnerability. The company also explains that it "has not received any information to indicate that these vulnerabilities have been used to attack customers at this time."
You can check the list of Surface gear that will receive the patch at the link above, but Microsoft says the updates will be available to devices running Windows 10 via Windows Update or through the Microsoft Download Center.
Source: Intel
While Haiku OS is incredibly close to delivering its long-awaited beta, the release didn't end up materializing in 2017, but the project still made much headway on this open-source, BeOS-inspired operating system.
The Haiku OS project has put out its latest activity report highlighting work done through the end of December. Recent progress includes polishing of the user interface and other UI improvements, automatic generation of the package repositories, fixes to the Haiku standard library, a USB 3.0 stack that is getting into better shape, many fixes to the TCP stack, continually improving Btrfs file-system support, and UEFI support that is also coming along.
More details on the latest adventures in the Haiku OS space can be found via the activity reports at Haiku-OS.org. It will be interesting to try out Haiku OS once it reaches beta in early 2018.