Friday, 25 May 2018

Feral's GameMode May Soon Have Soft Real-Time Capabilities

Michael Larabel is the principal author of Phoronix.com and founded the site in 2004 with a focus on enriching the Linux hardware experience. Michael has written more than 10,000 articles covering the state of Linux hardware support, Linux performance, graphics drivers, and other topics. Michael is also the lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org automated benchmarking software. He can be followed via Twitter or contacted via MichaelLarabel.com.



Read the full article here by Phoronix

Tuesday, 22 May 2018

Rune - Haiku images on ARM


Up until recently, Haiku builds for ARM have targeted individual ARM boards. The compile process for ARM images required two things: an architecture and a target board (such as the Raspberry Pi 2). This board setting adjusted a large number of defines throughout Haiku at compile time to set the operating system up for the target ARM device. The board selection also handled placing all the proprietary bits (a lot of which have sketchy licensing) into the Haiku image during compile. Haiku would then have to distribute these files, sketchy licensing and all.

Over the past few years, François Revol, Ithamar R. Adema, and others have worked to add Flat Device Tree (FDT) support to Haiku. FDTs enable operating systems to obtain core knowledge of the devices they run on by simply swapping one or more compiled binary files. These files describe critical things the operating system needs to know about the hardware it runs on, such as which devices exist at which memory locations (think video frame buffers, serial ports, etc.).

In a series of cryptic commits in July 2017, I removed these board-centric build steps with grand plans of making testing (and running) Haiku on ARM devices easier.

No, this does not mean Haiku has only just gained the ability to run on ARM; it has been able to do that for a while now. The goal of these changes and improvements is to speed up development of Haiku's ARM build, and to simplify the distribution of ARM builds into a single, generic ARMv7 image.



Read the full article here by OSNews

SafeKeeper: protecting web passwords using trusted execution environments

SafeKeeper: protecting web passwords using trusted execution environments Krawiecka et al., WWW’18

(If you don’t have ACM Digital Library access, the paper can be accessed either by following the link above directly from The Morning Paper blog site, or from the WWW 2018 proceedings page).

Today’s paper is all about password management for password protected web sites / applications. Even if we assume that passwords are salted and hashed in accordance with best practice (NIST’s June 2017 digital identity guidelines now mandate the use of keyed one-way functions such as CMAC), an adversary that can obtain a copy of the back-end database containing the per-user salts and the hash values can still mount brute force guessing attacks against individual passwords.
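The keyed-one-way-function idea can be made concrete with a short sketch. This is illustrative only: the paper specifies AES-CMAC, but Python's standard library has no CMAC, so HMAC-SHA256 stands in here, and the key handling and function names are my own, not from SafeKeeper.

```python
import hashlib
import hmac
import os

# In SafeKeeper this key would be generated and held inside the SGX
# enclave; here it just lives in process memory for illustration.
SECRET_KEY = os.urandom(32)

def protect_password(password: str, salt: bytes) -> bytes:
    """Keyed one-way function over the salted password.

    Without SECRET_KEY, an attacker holding the salt and this tag
    cannot mount an offline guessing attack, which is the point of
    using a keyed function instead of a plain salted hash.
    """
    return hmac.new(SECRET_KEY, salt + password.encode(), hashlib.sha256).digest()

def verify_password(password: str, salt: bytes, stored_tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(protect_password(password, salt), stored_tag)
```

The database stores only the salt and the tag; guessing attacks against the stored values require the key as well.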

SafeKeeper goes a lot further in its protection of passwords. What really stands out is the threat model. SafeKeeper keeps end user passwords safe even when we assume that an adversary has unrestricted access to the password database. Not only that, the adversary is able to modify the content sent to the user from the web site (including active content such as client-side scripts). And not only that! The adversary is also able to read all content sent to the backend servers, including any content encrypted by a TLS session key. The adversary can also execute any software they like on the backend server(s), and can launch state-of-the-art phishing attacks. Such an adversary could, for example, be one that has taken over a backend server, an operator of the service, or the operator of a service that a client has been hoodwinked into connecting to in lieu of the real thing.

Given an adversary like that, how is it possible to design a system that protects users’ passwords!? The adversary should not be able to obtain a user password through any means other than guessing. Offline guessing should be computationally infeasible irrespective of password strength, and online guessing must be throttled (rate limited) regardless of the adversary’s computational capabilities. The end user must also be able to tell whether they are communicating safely with a trustworthy backend service.

The answer is a clever use of trusted execution environments (Intel SGX in this case).

High-level design of SafeKeeper

On the server side, SafeKeeper’s password protection service computes cipher-based message authentication codes (CMAC) on salted passwords before storing them in the database. To perform a guessing attack against such passwords, an adversary also requires the CMAC key. SafeKeeper randomly generates this and protects it within an SGX enclave.

To prevent repeated guessing attacks using the online service, SafeKeeper enforces rate limiting within the trusted execution environment.

To prevent eavesdropping on passwords before they arrive at the password protection service, SafeKeeper establishes a secure end-to-end channel directly between the client and the enclave.

To ensure the client can trust the backend service it is interacting with, remote attestation is used to validate that the client is communicating with a genuine SafeKeeper Trusted Application.

To ensure that the user knows it is safe to enter their credentials, the SafeKeeper browser add-on displays trust information for the backend service, and highlights to the user the input fields that will be protected.

Server-side protection

The server side is implemented as an SGX enclave.

  • The init function is called whenever the enclave is started. If this is the first time the enclave has been started, Intel’s hardware random number generator is used to generate a new strong CMAC key.
  • The shutdown function seals the enclave state (CMAC key, the map of salts, and password attempt counters) so that it can be securely stored outside of the enclave. This sealed data can be restored in the enclave on subsequent calls to init.
  • The process function takes as input a password (encrypted on the client side) and the corresponding salt value, calculates the CMAC, and returns the result. This result can be stored by the database when the password is first set, and then tested against in future login attempts.
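The three entry points above can be modeled outside SGX as a plain Python class. This is a sketch under heavy assumptions: JSON serialization stands in for SGX's hardware-bound sealing, HMAC-SHA256 stands in for AES-CMAC, session-key decryption in `process` is omitted, and all names are illustrative rather than taken from the SafeKeeper code.

```python
import hashlib
import hmac
import json
import os

class PasswordEnclave:
    """Toy model of the SafeKeeper enclave's init/shutdown/process API."""

    def init(self, sealed_state=None):
        """Called whenever the enclave starts. On first start, generate
        a fresh key (SGX would use the hardware RNG); otherwise restore
        previously sealed state."""
        if sealed_state is None:
            self.key = os.urandom(32)
            self.attempt_counters = {}  # salt (hex) -> attempts used
        else:
            state = json.loads(sealed_state)
            self.key = bytes.fromhex(state["key"])
            self.attempt_counters = state["counters"]

    def shutdown(self):
        """Seal the enclave state for storage outside the enclave.
        Real SGX sealing encrypts under a CPU-bound key; this sketch
        just serializes the state."""
        return json.dumps({"key": self.key.hex(),
                           "counters": self.attempt_counters})

    def process(self, encrypted_password: bytes, salt: bytes) -> bytes:
        """Compute the keyed tag over the salted password. Decryption
        of the client-encrypted password with the session key is
        omitted in this sketch."""
        password = encrypted_password
        return hmac.new(self.key, salt + password, hashlib.sha256).digest()
```

The key point the sketch shows is that the tag is stable across a shutdown/restart cycle because the key survives inside the sealed state.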

Rate-limiting is done on a per-salt basis (no user ids are passed to the enclave). Assuming a salt-per-user, this is equivalent to rate-limiting by user. SafeKeeper uses a quantized maximum rate limit for each salt, which allows a fixed number of attempts within a pre-defined time interval but doesn’t mandate a delay between the attempts. Once the number of attempts has been exceeded, the user must wait until the next time interval before they can try again.
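A quantized rate limit of this kind is easy to sketch: attempts within a window need no minimum spacing, but once the per-window budget for a salt is spent, further attempts fail until the next window starts. The class and parameter names here are my own, not SafeKeeper's.

```python
import time

class QuantizedRateLimiter:
    """Allow up to max_attempts guesses per salt within each fixed
    time window, with no mandated delay between attempts."""

    def __init__(self, max_attempts=5, window_seconds=3600):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.counters = {}  # salt -> (window_index, attempts_used)

    def try_attempt(self, salt, now=None):
        now = time.time() if now is None else now
        window_index = int(now // self.window)
        idx, used = self.counters.get(salt, (window_index, 0))
        if idx != window_index:
            # A new window has started: the budget resets.
            idx, used = window_index, 0
        if used >= self.max_attempts:
            # Budget exhausted; caller must wait for the next window.
            return False
        self.counters[salt] = (idx, used + 1)
        return True
```

Because the enclave keys the counters by salt rather than user id, it never learns which user a guess targets.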

Remote attestation

Via a process known as remote attestation, a client can verify it is communicating with an endpoint that is running a genuine copy of the SafeKeeper Trusted Application inside a genuine SGX enclave.

Remote attestation is the process through which one party, the verifier, can ascertain the precise hardware and software configuration of a remote party, the prover. The objective is to provide the verifier with sufficient information to make a trust decision about the prover. SGX supports remote attestation by providing verifiers with a signed quote from the enclave, which includes the enclave’s precise identity (MRENCLAVE value) and the enclave’s public key. The verifier can then validate this quote using the Intel Attestation Service (IAS), and can then establish an end-to-end encrypted channel directly to the enclave.

The configuration information in the quote is checked by the client against a whitelist of known SafeKeeper Trusted Applications (TAs). The same TA can be used as-is by many websites, so this list should be short. Since the key agreement step to establish a session key is cryptographically bound to the TA’s remote attestation, the adversary cannot perform a man-in-the-middle attack. The client encrypts the user-supplied password using the session key, and on the server it is this encrypted password that is passed as input to the process function of the enclave, which can decrypt using the shared key.
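The client-side encryption under the agreed session key can be illustrated with a toy symmetric cipher. This is emphatically not what SafeKeeper uses (the real channel would use proper authenticated encryption); a SHA-256 counter-mode keystream stands in here purely to show the round trip from client to enclave.

```python
import hashlib

def keystream_xor(session_key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data against a SHA-256 counter-mode
    keystream derived from the session key. Encryption and decryption
    are the same operation."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        block = hashlib.sha256(
            session_key + counter.to_bytes(8, "big")).digest()
        keystream.extend(block)
        counter += 1
    return bytes(d ^ k for d, k in zip(data, keystream))
```

The client calls this on the password before submission; only the enclave, holding the same session key, can invert it.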

Client-side assurances

The client-side mechanism is implemented as a browser addon (e.g., a Chrome extension). The server specifies which input fields should be encrypted and sent to the SafeKeeper TA. The addon parses this information and encrypts any text entered into these fields.

A rogue server might specify the wrong field names though, leaving the actual password field unprotected. To prevent this, SafeKeeper visually highlights the fields to be encrypted, greying out the rest of the page.

To avoid an adversary spoofing the highlighting performed by SafeKeeper, the user is required to click on the extension icon to activate the highlighting.

This click cannot be detected or prevented by the adversary (as it is outside of the browser DOM). After the user has clicked, the SafeKeeper icon is again changed to indicate that it is in the highlighting mode… The user is thus assured that a password entered into such an input field will always be protected by SafeKeeper, regardless of the identity of the website or the behaviour of the server.

The addon is stateless and user-agnostic, which means it could also be directly integrated into web browsers.

What about…?

A malicious server operator can include javascript in the page that attempts to read the password as it is typed by the user. SafeKeeper provides a mode which will disable scripts for an individual website and then reload the page with all scripts but SafeKeeper’s blocked. It’s not perfect (there are race conditions), and on many modern sites disabling javascript interferes with the usability of the web page, so SafeKeeper allows client-side scripts by default. An alternative discussed in the extended technical report is to change the UI so that the password is directly entered into a SafeKeeper popup controlled by the extension. For this to work, users must be trained to only enter password data into SafeKeeper popups.

I also wondered about the enclave becoming a single point of failure, and how SafeKeeper would provide high availability. The answers can be found in sections 7.1 and 7.2 of the extended technical report. SafeKeeper supports backup and recovery across a set of key-holding enclaves, with unanimous approval of all current key holding enclaves required to add a new one. If a machine fails, an operator must provide a proof that the failed machine has been revoked in order to change the set of key holding enclaves. If desired, the backup and recovery protocol can be extended to support multiple concurrent primaries. In this case, the overall permissible password checking rate is divided amongst the primaries.

The SafeKeeper site including links to all technical papers and the source code can be found at https://ssg.aalto.fi/research/projects/passwords/.



Read the full article here by the morning paper

Spectre and Meltdown variant 4: Microsoft, Google and Intel reveal new Speculative Store Bypass chip vulnerability

Just when you thought you could forget about the Spectre and Meltdown chip vulnerabilities, yet another variant has been discovered. Known as Speculative Store Bypass, the vulnerability affects chips from AMD and Intel, as well as Power 8, Power 9 and System z processors. The vulnerability has been assigned CVE-2018-3639, and successful exploitation would mean that an attacker could gain access to data. The attack can be carried out through a "language-based runtime environment" such as JavaScript. Some patches exist while others are in development, and they include the same performance hit associated with patches for the previous vulnerabilities.


Read the full article here by Betanews

Friday, 18 May 2018

No-Battery HD Video Streaming Does It with Backscatter

What if Google Glass didn’t have a battery? That’s not too far-fetched. This battery-free HD video streaming camera could be built into a pair of eyeglass frames to stream HD video to a nearby phone or other receiver using no bulky batteries or external power source. Researchers at the University of Washington are using backscatter to pull this off.

The problem is that a camera which streams HD video wirelessly to a receiver consumes over 1 watt due to the need for a digital processor and transmitter. The researchers have moved the processing hardware into the receiving unit. They then send the analog pixels from the camera sensor directly to backscatter hardware. Backscatter involves reflecting received waves back to where they came from. By adding the video signal to those reflected waves, they eliminated the need for the power-hungry transmitter. The full details are in their paper (PDF), but here are the highlights.

Battery-free camera design approach

On the camera side, the pixel voltages (CAM Out) are an analog signal which is fed into a comparator along with a triangular waveform. Wherever the triangle wave’s voltage is lower than the pixel voltage, the comparator outputs a 0; otherwise, it outputs a 1. In this way, the pixel voltage is converted to different pulse widths. The triangular waveform’s minimum and maximum voltages are selected such that they cover the full possible range of the camera voltages.
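The comparator step can be simulated in a few lines. This is a sketch of the principle, not the researchers' hardware: the ramp here is one rising flank of the triangle wave, sampled at a handful of points, and the step count and voltage range are arbitrary.

```python
def pwm_encode(pixel_voltage, v_min=0.0, v_max=1.0, steps=16):
    """Compare samples of a rising ramp (one flank of the triangle
    wave) against the pixel voltage. Per the description above, the
    comparator outputs 0 while the ramp is below the pixel voltage and
    1 once it crosses, so the width of the low pulse encodes the
    pixel's brightness."""
    bits = []
    for i in range(steps):
        ramp = v_min + (v_max - v_min) * i / (steps - 1)
        bits.append(0 if ramp < pixel_voltage else 1)
    return bits
```

A brighter pixel keeps the comparator low for more of the ramp, so its pulse of zeros is wider.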

The sub-carrier modulation with the XOR gate in the diagram is there to address the problem of self-interference. This is unwanted interference from the transmitter at the same frequency as the carrier. So the PWM output is converted to a different frequency using a sub-carrier, and the receiver can then filter out the interference. The XOR gate is actually part of an FPGA which also inserts frame and line synchronization patterns.
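The XOR step itself is simple enough to sketch. Again this is illustrative: the sub-carrier here is an ideal square wave at an arbitrary rate, and the function name is mine.

```python
def subcarrier_modulate(pwm_bits, half_period=1):
    """XOR the PWM bit stream with a square-wave sub-carrier, shifting
    the backscattered signal away from the carrier frequency so the
    receiver can filter out self-interference. XORing again with the
    same square wave recovers the original stream."""
    return [b ^ ((i // half_period) % 2) for i, b in enumerate(pwm_bits)]
```

Because XOR is its own inverse, the receiver demodulates by applying the same square wave once more.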

They tested two different implementations with this circuit design: a 112 x 112 grayscale one at up to 13 frames per second (fps), and an HD one. Unfortunately, no HD camera on the market gives access to the raw analog pixel outputs, so they took HD video from a laptop using USB and ran that through a DAC and then into their PWM converter. The USB limited it to 10 fps.

The result is that video streaming at 720p and 10 fps uses as little as 250 μW and can be backscattered up to sixteen feet. They also simulated an ASIC which achieved 720p and 1080p at 60 fps using 321 μW and 806 μW respectively. See the video below for an animated explanation and a demonstration. The resulting video is quite impressive for passive power only.

If the University of Washington seems familiar in the context of backscatter, that’s because we’ve previously covered their battery-free (almost) cell phone. Though they’re not the only ones experimenting with it. Here’s where backscatter is being used for a soil network. All of this involves power harvesting, and now’s a great time to start brushing up on these concepts and building your own prototypes. The Hackaday Prize includes a Power Harvesting Challenge this year.



Read the full article here by Hack a Day

Wednesday, 16 May 2018

Flutter beta 3 released, Fuchsia gets initial ART support


Tomorrow at Google I/O’s developer keynote, we will see the official launch of Flutter Beta 3. This beta is an important step towards the 1.0 build for Flutter, with a heavy focus on solidifying the improvements that Google has been working on since they launched the initial Flutter Beta.

First and foremost among those improvements is the implementation of the Dart 2 programming language. The second version of Dart was designed specifically to address the challenges that early Flutter builds ran into, and brings some substantial changes, including strong typing, cleaner syntax, and an updated developer toolchain.

Flutter and Dart are also important parts of Fuchsia. And on that note, might I point out that Fuchsia is getting support for ART, the Android Runtime?



Read the full article here by OSNews

Friday, 11 May 2018

Microsoft Enabling Javascript in Excel Has Security Pros Anxious


On Monday at its Build conference in Seattle, Microsoft announced a host of software products and updates related to buzzy technologies like machine learning and mixed reality. But the company also debuted a number of smaller upgrades to existing products, including a new Excel feature that allows users to execute custom JavaScript functions in spreadsheets. That might be helpful for spreadsheet junkies, but it makes security researchers cringe.

The move is well-intentioned; it should make Excel even more powerful and capable by allowing users to integrate expanded information from the web and third-party services—think bank account balances or stock prices. But JavaScript also creates more interconnection and more access points—meaning more points of potential vulnerability. It's already a bit of a web security nightmare. And on top of that, attackers have long shown their willingness to exploit customization and automation features in Excel—and other Microsoft Office programs—to create malicious files for phishing and other attacks. The ubiquity of Microsoft Office files makes them the perfect vector for tricking victims and wreaking havoc.


"JavaScript opens up another attack vector for malicious documents, and is yet another thing that we as defenders will have to watch out for other than what Excel can already do," says Chase Dardaman, a malware analysis researcher based in Texas. "The main concern is that since JavaScript usage in Excel is so new we do not know what controls Microsoft will put around it. They will need to make it more open and easier to use than it currently is, and that could open up new attack vectors."

JavaScript is an extremely popular programming language, and has been around since 1995. It's often used to power common features on websites like multimedia modules and form submissions. But if JavaScript components aren't contained and limited in what they can access, attackers can potentially exploit them to access and manipulate systems and execute malicious code. In just one example, hackers are known to exploit insecure JavaScript implementations to pull off cross-site scripting (XSS) attacks, which let attackers hijack websites to steal data or serve malware to innocent visitors. JavaScript has also been around for so long that lots of buggy prefab code exists in libraries around the web, and frequently gets incorporated into unsuspecting websites.

Right now Microsoft has only released the expanded Excel functionality to members of its "Office Insiders" program, so it still has time to refine the implementation. In particular, observers say they hope Microsoft will turn JavaScript execution off by default, so Excel only allows the custom functions to run after specifically prompting a user to approve—or deny—it each time. A company spokesperson said in a statement to WIRED that, "We take the security of our customers seriously, and by design, only trusted logic can execute within the context of a custom function—with appropriate controls to gate usage.”

As the company works on the new feature, analysts are already exploring what attackers could do with it if and when it hits the mainstream market. Within a day of the announcement this week, Dardaman published a proof of concept that showed how the new functionality could be programmed to run the CoinHive cryptomining program through an Excel document. Dardaman was even able to set things up so that the mining quietly relaunched each time a user opened the compromised Excel file.

Thankfully, the pre-release status of JavaScript for Excel makes it difficult to share tainted files, but researchers say that protection largely stems from the feature still being in a testing phase. Eventually, Dardaman suspects that Microsoft will refine the feature and make it easier to use. Streamlining the tool for legitimate users could make it more effective for attackers.

"I understand what Microsoft is going for with this, but I believe the harm vastly outweighs the good," says Mitch Edwards, a threat intelligence analyst and researcher. "Accessibility has been put before security for a long time. We in the security community are still trying to get a grip on other attack vectors in the Office Suite, and the addition of JavaScript functionality to Excel adds another tool to the belt of the attacker."

Observers note that a cryptominer isn't the only thing an attacker could program into a JavaScript-enabled Excel file. Phishers and targeted attackers looking to gain access to a system or spread malware could lean on these innocuous-looking file downloads as the jumping off point to achieving a number of goals, from data theft to gaining remote control of a victim's device. Just look at how attackers have exploited the Office automation feature "Macros" for years, building special "Macro malware" to spread in spam emails and ZIP files.

"With all of the badness JavaScript can cause, I'd imagine that Microsoft is going to have to handle it the same way that they handle Macros, which is to have it turned off by default," says Crane Hassold, a threat intelligence manager at the security firm PhishLabs, who previously worked as a digital behavior analyst for the FBI. If the company doesn't, he says, the feature will become popular with "not just phishers, but cyber threat actors in general."




Read the full article here by Wired Top Stories

Friday, 4 May 2018

CineMAH presenta:INFINITY GUANTY persy a scuoly


04/05/2018

I'll tell you right away that this pseudo-review contains spoilers, gluten, traces of tree nuts in shell, and lactose. Don't say I didn't warn you. And anyway, come on, who cares: it's just a Marvel film, not LA MOGLIE DEL SOLDATO! Even if you haven't seen it, the millions of professional spoilerers who live only to get to the cinema before everyone else, before even the director and the actors, just for the pleasure of telling you "Ah, you didn't know that Furenzio is DEEEAAAD?" will have told you all about it. And Furenzio is him.

Lights down in the theatre. We begin. With that thing I like an awful lot, the Marvel Studios logo. When I see it, with its triumphal music, I'm already all set.




Read the full article here by Come Non Detto

Google Releases Open Source Framework For Building 'Enclaved' Apps For Cloud

An anonymous reader quotes a report from Ars Technica: Today, Google is releasing an open source framework for the development of "confidential computing" cloud applications -- a software development kit that will allow developers to build secure applications that run across multiple cloud architectures even in shared (and not necessarily trusted) environments. The framework, called Asylo, is currently experimental but could eventually make it possible for developers to address some of the most basic concerns about running applications in any multi-tenant environment. Container systems like Docker and Kubernetes are designed largely to allow untrusted applications to run without exposing the underlying operating system to badness. Asylo (Greek for "safe place") aims to solve the opposite problem -- allowing absolutely trusted applications to run in "Trusted Execution Environments" (TEEs), which are specialized execution environments that act as enclaves and protect applications from attacks on the underlying platform they run on.




Read the full article here by Slashdot