Wednesday, 21 December 2016

Uber Admits To Self-driving Car 'Problem' in Bike Lanes As Safety Concerns Mount

Uber has admitted that there is a "problem" with the way its autonomous vehicles cross bike lanes, raising serious questions about the safety of cyclists just days after the company announced it would openly defy California regulators over self-driving vehicles. From a report in The Guardian: An Uber spokeswoman said on Monday that engineers were working to fix a flaw in the programming that advocates feared could have deadly consequences for cyclists. Uber began piloting its self-driving vehicles in its home town of San Francisco last week, despite state officials' declaration that the ride-share company needed special permits to test its technology. On day one, numerous autonomous vehicles -- which have a driver in the front seat who can take control -- were caught running red lights and committing a range of traffic violations. Despite threats of legal action from the Department of Motor Vehicles (DMV) and California's attorney general, Kamala Harris, Uber refused to back down on Friday, claiming its rejection of government authority was "an important issue of principle."



Read the full article here by Slashdot

Morgan Freeman To Voice Mark Zuckerberg's Jarvis

Facebook CEO Mark Zuckerberg recently demoed his homemade artificial intelligence assistant Jarvis for Fast Company, and while their report didn't mention anything specific about the assistant's synthesized voice at the time, we have now learned that Morgan Freeman will be the voice behind Jarvis. Robert Downey Jr. originally volunteered to be the new voice of Jarvis under certain conditions, but Zuckerberg decided to let the public weigh in on Facebook. With more than 50,000 comments, Morgan Freeman emerged victorious. USA Today reports: Zuckerberg told Fast Company he called Freeman and said: "Hey, I posted this thing, and...thousands of people want you to be the voice. Will you do it?" Freeman told Zuckerberg: "Yeah, sure." Of course, Freeman has other starring voice roles in the tech world. He's one of the celebrity voices on Google's navigation app Waze. Facebook has not disclosed whether Freeman is getting paid, according to Fast Company.



Read the full article here by Slashdot

Tuesday, 20 December 2016

Intel, Microsoft Bring Merged Reality To Life

Through a new initiative, called Project EVO, we are harnessing our collective innovation to showcase what is uniquely possible with the PC and, together, set the course for the future. We believe that computing must evolve to become capable of even more unbelievable things, including smarter voice interaction, virtual and merged reality experiences for all, more powerful gaming, strengthened security and identity protection, and true always-connected computing. This is at the very heart of what we do at Intel: We make amazing experiences possible through technology. With Intel architecture, these experiences come to life across the broadest range of devices – spanning from the growing Internet of Things to monster gaming rigs, to always-connected sleek notebooks and versatile 2-in-1s. And, in a world of over 15 million Windows desktop applications, our three decades of application compatibility on the Windows platform ensure that customers will see the reliability, performance and battery life that they expect across all form factors.

Read the full article here by [H]ardOCP News/Article Feed

Netflix Keeping Bandwidth Usage Low By Encoding Its Video With VP9 and H.264/AVC Codecs

Netflix announced last week that it is adding support for offline video downloads. The company has since shared that it is using the VP9 video compression codec to keep those file sizes down. An anonymous reader shares an article on Slashgear (edited): For streaming content, Netflix largely relies on H.264/AVC to reduce bandwidth, but for downloaded content it uses VP9 encoding, which delivers better quality video for the same amount of data. The challenge is that VP9 isn't supported everywhere -- it is supported on Android devices and in the Chrome browser. So to get around the lack of support on iOS, Netflix offers downloads in H.264/AVC High, whereas streams on those devices are encoded in H.264/AVC Main. Netflix chooses the optimal encoding format for each title on its service, having found, for instance, that animated films are easier to encode than live-action. Netflix says that H.264 High encoding saves 19% of bandwidth compared to its other encodes, while VP9 saves 36%.
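As a rough sense of scale, here is the arithmetic those percentages imply. This is a sketch with an invented baseline bitrate (Netflix doesn't publish a single figure), not real Netflix numbers:

```python
# Back-of-the-envelope download sizes under the savings quoted above.
# The baseline bitrate is an assumption for illustration only.
BASELINE_KBPS = 3000        # hypothetical average stream bitrate
RUNTIME_S = 90 * 60         # a 90-minute title

def size_gb(kbps):
    """File size in gigabytes for a given bitrate over the full runtime."""
    return kbps * 1000 * RUNTIME_S / 8 / 1e9

baseline = size_gb(BASELINE_KBPS)
print(f"baseline encode:   {baseline:.2f} GB")
print(f"H.264 High (-19%): {baseline * 0.81:.2f} GB")
print(f"VP9 (-36%):        {baseline * 0.64:.2f} GB")
```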



Read the full article here by Slashdot

Start building Actions on Google

Posted by Jason Douglas, PM Director for Actions on Google

The Google Assistant brings together all of the technology and smarts we've been building for years, from the Knowledge Graph to natural language processing. To be a truly successful Assistant, it should be able to connect users across the apps and services in their lives. That makes it really important to enable an ecosystem where developers can bring diverse and unique services to users through the Google Assistant.

In October, we previewed Actions on Google, the developer platform for the Google Assistant. Actions on Google further enhances the Assistant user experience by enabling you to bring your services to the Assistant. Starting today, you can build Conversation Actions for Google Home and request to become an early access partner for upcoming platform features.

Conversation Actions for Google Home

Conversation Actions let you engage your users to deliver information, services, and assistance. And the best part? It really is a conversation -- users won't need to enable a skill or install an app, they can just ask to talk to your action. For now, we've provided two developer samples of what's possible: just say "Ok Google, talk to Number Genie" or try "Ok Google, talk to Eliza" for the classic 1960s AI exercise.

You can get started today by visiting the Actions on Google website for developers. To help create a smooth, straightforward development experience, we worked with a number of development partners, including conversational interaction development tools API.AI and Gupshup, analytics tools DashBot and VoiceLabs, and consulting companies such as Assist, Notify.IO, Witlingo and Spoken Layer. We also created a collection of samples and voice user interface (VUI) resources, or you can check out the integrations from our early access partners as they roll out over the coming weeks.
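The official client library for this is Node.js; purely to illustrate the invocation flow described above (no install step, the user just asks to "talk to" an action), here is a toy dispatcher in Python. Nothing below is the real Actions SDK; every name is invented:

```python
# Toy model of Conversation Action invocation: "Ok Google, talk to X"
# hands the user off to action X's dialog loop. This is NOT the real
# Actions SDK (which is a Node.js library); all names are invented.
import re

ACTIONS = {
    "number genie": lambda text: "I'm thinking of a number from 0 to 100.",
    "eliza": lambda text: f"Why do you say '{text}'?",
}

def handle(utterance, session):
    match = re.match(r"(?:ok google, )?talk to (.+)", utterance.lower())
    if match and match.group(1).strip() in ACTIONS:
        session["active"] = match.group(1).strip()  # hand off, no install step
        return f"Sure, here's {session['active'].title()}."
    if "active" in session:
        return ACTIONS[session["active"]](utterance)  # in-conversation turn
    return "Sorry, I can't help with that yet."

session = {}
print(handle("Ok Google, talk to Eliza", session))
print(handle("I am feeling great", session))
```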

Introduction to Conversation Actions by Wayne Piekarski

Coming soon: Actions for Pixel and Allo + Support for Purchases and Bookings

Today is just the start, and we're excited to see what you build for the Google Assistant. We'll continue to add more platform capabilities over time, including the ability to make your integrations available across the various Assistant surfaces like Pixel phones and Google Allo. We'll also enable support for purchases and bookings as well as deeper Assistant integrations across verticals. Developers who are interested in creating actions using these upcoming features should register for our early access partner program and help shape the future of the platform.

Build, explore and let us know what you think about Actions on Google! And to stay in the loop, be sure to sign up for our newsletter, join our Google+ community, and use the “actions-on-google” tag on StackOverflow.


Read the full article here by Android Developers Blog

NVIDIA's GeForce 376.33 WHQL Drivers Fix Multiple Kernel Faults; Update ASAP

If you're one of those people who doesn't regularly update their graphics card drivers, and you're rocking an NVIDIA graphics card, you really should update to the latest WHQL version, 376.33. The release notes and a security bulletin issued by the company point to fixes for multiple kernel layer (nvlddmkm.sys) vulnerabilities in NVIDIA's previous driver releases, which could "Lead to a Denial of Service, Escalation of Privileges, or Both".

In total, seven vulnerabilities are reported as fixed, with NVIDIA acknowledging contributions to their discovery from engineers with Google Project Zero and Cisco Talos.




Read the full article here by techPowerUp!

Monday, 19 December 2016

Amazon in Vercelli: a distribution center and 600 jobs

"To meet growing customer demand in Italy, Amazon has announced the construction of a new distribution center in Vercelli, in Piedmont. The center will be the heart of Amazon's distribution network in north-western Italy." So writes the world's most popular e-commerce company in a press release. It had been talked about for some time; only now is it official. And it is an extraordinary breath of oxygen for employment in this corner of Piedmont.

"By autumn 2017, to meet growing demand from Italian and European consumers, Amazon will have three distribution centers in Italy: the facility already operating in Castel San Giovanni (Piacenza), the new Vercelli center, and the recently announced Passo Corese site, which will come online by autumn 2017. Amazon will invest 65 million euros in the new 100,000-square-meter Piedmont facility, which will contribute significantly to the economic development of the area." Amazon plans to create more than 600 new jobs here within three years of opening. The Amazon hub will rise at Larizzate, in the industrial area a few hundred meters from the Vercelli Ovest exit of the E25 (the Santhià link road connecting the Turin-Milan motorway to the A26 Gravellona Toce-Genoa).

THE STRATEGY

"Vercelli is another important step forward in our growth strategy in Italy, which responds to continuous, steady growth in consumer demand. The 600 jobs we have planned for Vercelli will join the 1,200 permanent positions already in place at our Castel San Giovanni distribution center and the 1,200 planned for the Passo Corese hub," said Roy Perticucci, Amazon's vice president of European Operations. "This new facility will be integrated into Amazon's network, which currently counts 31 distribution centers across seven European countries and lets us honor our commitment to fast, reliable delivery for Italian and European customers. I am confident this investment will benefit both customers and the local economy, creating new jobs and giving the Italian small and medium-sized businesses that use our marketplace service an easier way to reach millions of consumers across Europe."

REACTIONS

Maura Forte, mayor of Vercelli, said: "We are happy to welcome Amazon to Vercelli. The new facility and the new jobs fit perfectly with our goal of attracting new economic opportunities that can grow the local economy. Beyond the positive economic impact, we know Amazon to be a strategic partner deeply engaged with the communities where it operates."

Sergio Chiamparino, president of the Piedmont Region, commented: "Amazon's new distribution center in Vercelli is a significant opportunity for Piedmont that will lead to the creation of hundreds of jobs. The Region considers the investment strategic for the area and is committed to working with Amazon to create and maintain a positive environment in which companies can grow, innovate and expand. It will be an important project, not only for its significant employment effects but for the entire network of local SMEs."

THE SITE

"The new Vercelli distribution center, together with the existing ones, will allow Amazon to handle current and future demand, both in Italy, where an ever-growing number of households chooses Amazon.it for their shopping, and in Europe," the company writes in its statement. "As Amazon's logistics network grows, so do the opportunities for Italian sellers, with new possibilities opening up for small businesses that decide to sell their products on the web's largest and busiest marketplace. Italian small and medium-sized businesses can extend their online presence across Amazon's 14 websites, available in 10 languages, drawing on more than 100 fulfillment centers worldwide to reach 304 million customers in every corner of the globe."

EMPLOYMENT

"The new distribution center is expected to begin operating in autumn 2017. The new positions will soon be listed at www.amazon.jobs. Amazon employees' pay sits at the high end of the logistics sector and is topped up with benefits such as employee discounts on Amazon.it purchases, private health insurance and private medical assistance. Amazon also offers its employees forward-looking benefits such as Career Choice, which covers up to 95% of tuition and textbook costs for training courses over four years."

INVESTMENTS IN ITALY

Since launching in Italy in 2010, Amazon has invested more than 450 million euros and created 2,000 jobs in the country. The Castel San Giovanni site, the first logistics center Amazon opened in Italy, was inaugurated in 2011. In November 2015 Amazon opened a 1,500-square-meter urban distribution center in Milan to serve Amazon Prime Now customers, and last July it announced a new distribution center at Passo Corese (Rieti) in Lazio. "With an investment of 150 million euros and the creation of 1,200 jobs within three years of opening," the company writes, "the new facility will allow Amazon to handle growing customer demand and keep pace with the rapid growth of the Amazon.it product catalog."

Creative Commons license

Some rights reserved.



Read the full article here by LASTAMPA.it

Sunday, 18 December 2016

18 December only: 30% back in loyalty-card credit on consumer tech (phones, TVs, etc.) - e.g. the 4" iPhone SE 16 GB smartphone (with 30% back on the card) at €342.30 in store @ Intermarché Longchamps - Rennes (35)

€342.30
€489.00 (-30%)
Location: Brittany (35). Hello, a deal straight from the Intermarché Longchamps in Rennes: 30% back in loyalty-card credit, Sunday 18 December only. Valid on phones, TVs, cameras, and more.

Also: 30% off champagne.

Proof: http://ift.tt/2hZVgYG

Tablets: http://ift.tt/2i0g81Z

Apple: http://ift.tt/2i06j3Y

Phones: http://ift.tt/2gMrw0j


Read the full article here by Dealabs.com : Tous les deals HOT

Wednesday, 14 December 2016

TP-Link Debug Protocol Gives Up Keys To Kingdom

If the headline makes today’s hack sound like it was easy, rest assured that it wasn’t. But if you’re interested in embedded device hacking, read on.

[Andres] wanted to install a custom OS firmware on a cheap home router, so he bought a router known to be reflashable, only to find that the newer version of the firmware made that difficult. We've all been there. But instead of throwing the device in the closet, [Andres] beat it into submission, discovering a bug in the firmware, exploiting it, and writing it up for the manufacturer. (And just as we're going to press: he's posted the code for the downgrade exploit here.)

This is not a weekend hack — this took a professional many hours of serious labor. But it was made a lot easier because TP-Link left a debugging protocol active, listening on the LAN interface, and not requiring authentication. [Andres] found most of the information he needed in patents, and soon had debugging insight into the running device.
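To appreciate why an unauthenticated debug service listening on the LAN is such a gift to an attacker, consider how little code it takes to reach one. This sketch uses a hypothetical port and payload, not TP-Link's actual protocol:

```python
# Minimal sketch: reaching a hypothetical unauthenticated debug service
# on a router's LAN side. Port and message format are invented here;
# the real TP-Link protocol is different.
import socket

ROUTER_IP = "192.168.0.1"   # typical LAN address for a home router
DEBUG_PORT = 12345          # hypothetical debug port

with socket.create_connection((ROUTER_IP, DEBUG_PORT), timeout=5) as sock:
    sock.sendall(b"HELLO\n")      # no credentials required by the service
    print(sock.recv(4096))        # whatever the service volunteers back
```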

After some heavy-duty static reverse engineering based on the firmware that he downloaded from the manufacturer’s website, he found a buffer overflow opportunity in the code, and managed to run his own code over the debugging link.

Because [Andres] is a security professional, he gets paid for finding vulnerabilities like this, but also for making them sound ominous. Still, he notes that you can only reach the debug protocol over the local LAN, and not from the network at large. So it’s much more likely that you’ll use this exploit to flash the firmware of your choice than it is that any baddies would do so. (It’s not a bug, it’s a feature!) But still, this is an awesome hack!

Thanks to [norber] for the tip!


Filed under: misc hacks, security hacks

Read the full article here by Hack a Day

Helium Rain, a realistic UE4 space simulation game is coming to Linux in 2017, looks brilliant

 GamingOnLinux: The developers of Helium Rain sent word that their realistic space simulation game will fully support Linux at launch.



Read the full article here by Linux Today

Monday, 12 December 2016

Wine 2.0 Takes Shape, First Release Candidate Updates the Mono Engine, More

 softpedia: Wine 2.0 RC1 (Release Candidate 1) paves the way for the major Wine 2.0 launch and marks the beginning of the code freeze period.



Read the full article here by Linux Today

New Ransomware Offers The Decryption Keys If You Infect Your Friends

MalwareHunterTeam has discovered "Popcorn Time," a new in-development ransomware with a twist. Gumbercules!! writes: "With Popcorn Time, not only can a victim pay a ransom to get their files back, but they can also try to infect two other people and have them pay the ransom in order to get a free key," writes Bleeping Computer. Infected victims are given a "referral code" and, if two people are infected via that code and pay up, the original victim is (potentially) given their decryption key. While encrypting your files, Popcorn Time displays a fake system screen that says "Downloading and installing. Please wait" -- followed by a seven-day countdown clock for the amount of time left to pay its ransom of one bitcoin. That screen claims that the perpetrators are "a group of computer science students from Syria," and that "all the money that we get goes to food, medicine, shelter to our people. We are extremely sorry that we are forcing you to pay but that's the only way that we can keep living." So what would you do if this ransomware infected your files?
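The referral mechanic reduces to a single conditional. Here is a toy model of the logic as the article describes it, with invented names and no cryptography whatsoever:

```python
# Toy model of the referral scheme described above: a victim's key is
# (allegedly) released once two people infected via their referral code
# pay up. Purely illustrative; no real crypto or malware logic here.
from collections import defaultdict

paid_referrals = defaultdict(int)   # referral code -> paying victims

def victim_pays(referral_code=None):
    """Record a ransom payment, credited to a referral code if present."""
    if referral_code:
        paid_referrals[referral_code] += 1

def key_released(own_code, paid_ransom):
    # The two paths the article describes: pay yourself, or refer two payers.
    return paid_ransom or paid_referrals[own_code] >= 2

victim_pays("ABC123")
victim_pays("ABC123")
print(key_released("ABC123", paid_ransom=False))   # True: two referrals paid
```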



Read the full article here by Slashdot

With Wine Git, You Can Run The D3D11 Blizzard Overwatch Game On Linux

Wine has long been working on its Direct3D 11 support, but it's not quite ready for major Windows games with the upcoming Wine 2.0 release. With some work that didn't make the cut for Wine 2.0, Blizzard's Overwatch game appears to be running well...

Read the full article here by Phoronix

Microsoft and Qualcomm's Project Cobalt runs Photoshop on ARM64 (Updated)


Update, 12 December, 11:45: Thanks to Alexandre L. (a not-so-anonymous reader) and Neowin, we have learned that the Snapdragon 820 is reserved for application development and that a Snapdragon 835 will be required once the feature reaches the general public. So no backward compatibility to expect: you will have to open your wallet and pick up new hardware to enjoy the delights of 32-bit Windows 10 on an ARM platform.


Microsoft and Qualcomm have jointly announced their collaboration to run Windows 10 on ARM processors. Big deal, you might say: we already had Windows RT, and the ability to run UWP apps (the few universal applications published in the Microsoft Store). Stop right there, because this is different: the goal is compatibility with win32 applications via emulation -- all 32-bit applications, without modification or recompilation.

 

The first information about project "Cobalt" went public last month, but little had leaked beyond a vague availability target of the Redstone 3 update, the Windows 10 revision that will follow the Creators Update expected in March 2017. Today things got concrete: the Microsoft WinHEC conference held in Shenzhen, China gave Microsoft and Qualcomm the opportunity to show what their solution can do by running, among other things, Adobe Photoshop on a mobile platform with an ARM64 Snapdragon 820 processor, and gave Microsoft the occasion to declare:

 

For the first time, our customers will be able to experience the Windows they know with all the apps, peripherals and enterprise capabilities they require, on a truly mobile, power-efficient, always-connected device.

The craziest part, even though nobody seriously expects to run World of Warcraft on such hardware, is that it seems to hold up: applications look fluid and don't take forever to launch. We have no technical details about this emulation, but given the performance, it surely relies on both software and hardware components. The first processors to benefit will be Qualcomm's Snapdragon 820, and for now nothing says other manufacturers will get access, or even whether smaller Snapdragons will be included. And if they were, would performance keep up? Not certain, but the project is only beginning to reveal itself, and Microsoft and Qualcomm will naturally lead, above all, with the best they have up their sleeve.

 



Read the full article here by Le comptoir du hardware

Sunday, 11 December 2016

Mixed reality comes to your iPhone thanks to the Bridge headset

There's something more than a little magical about seeing the world in front of you being devastated by dragons or augmented with arrows pointing you to your next meeting. Alas, while mixing realities like that with our smartphones is already possible, the tech still is a long way off from reaching its potential — just look at early, disappointing efforts like Lenovo's enormous Tango phone. Luckily, startups are chasing the mixed reality dream too, including one — Occipital — that has a solid track record of solving the tricky problems that pop up when blurring boundaries between worlds. That's why the team's new mixed reality, the Bridge, seems so impressive right out of the gate.

Oh, and another thing: it's specifically for iPhones. For years now, most mobile virtual reality fun has been confined to Android, with cheap Gear VRs and Daydream Views making it easy to see what all the hype was about. While some VR games and apps exist for iPhones, Apple hardware historically hasn't gotten the same kind of developer love as Android has. To Occipital, that smelled like an opportunity. The Bridge will go on sale to the masses for $399 starting in March, but developers and the adventurous can snag their Explorer Editions as soon as next week. To understand what you'll actually get for your money, we'll have to rewind a bit.

Three years ago, the company released the Structure sensor, a fascinating bit of depth-sensing tech that was originally meant to bring augmented reality experiences to the iPad. Mixed reality still seemed like a hard sell back then, but there was no denying the sensor's ability to measure the world around it was the real deal. To hear Occipital marketing chief Adam Rodnitzky tell it, the sensor eventually started being used by real estate agents, interior decorators and doctors, and after three years, the Structure was still excellent at its job.

So, with headsets being hawked alongside smartphones all over the place, Occipital decided to make their own — they took a Structure sensor, slapped a five-element wide-angle lens in front of it, and built a sturdy, balanced frame around it. Turning an existing product like the Structure into a headset might seem like opportunism at its finest, but the end result has so much potential it almost doesn't matter.

I played with one of the Explorer Editions recently, and it was more impressive — and elaborate — than I expected. You can pop an iPhone 6 (or newer, but no SEs) into the frame and a magnetically latched door keeps it in place. From there, you place the Bridge on your head as you would a crown, and use a dial in the back to tighten it. Yes, it sounds like a sort of torture device, but the system actually works like a charm. The only real problem I came across was that the lenses sit closer to your eyes than in most other mobile VR headsets — that meant they pushed right up against my glasses most of the time. It could've been worse, but Rodnitzky assured me future models wouldn't smash my frames so noticeably.

Actually using the Bridge was a much smoother experience. Occipital doesn't have any launch titles planned for the Bridge's debut, but it does come with a demo app that stars an adorable robot pet of sorts named Bridget. With the help of a Wiimote-like Bluetooth controller, I spent a good ten minutes tossing a virtual ball around the office and watching Bridget loop around coffee tables to retrieve it. Her understanding of the world around her was fueled by a depth-scanning session that lasted only a few seconds — once that was done, I had mapped out a corner of our office with a level of precision that Lenovo's Phab 2 Pro wasn't able to match.

That might not be the fairest comparison to make, though: for now, the Structure sensor's software is only tuned to capture spaces of about 10 ft. by 10 ft., while Tango software usually tries to record whole swaths of a room at once. Structure's scope might be more limited, but it does a much better job within those constraints.

After dropping that ball one time too many, Bridget was tired and needed to charge. The answer? To grab her power cord and connect it to something that lit up, like a lamp. This is what I so sorely missed when I played with Tango — I wanted so badly for someone standing next to a virtual dinosaur to be able to interact with it, or to pluck a virtual domino off the ground. This was a pretty basic example, but the sort of object recognition the Structure can pull off was unexpectedly good for a headset.

Don't think the Bridge is only capable of the usual augmented reality tricks, either: at one point, I was directed to drop a portal on the ground in front of me. Once I stepped into it, I found myself walking around inside a space station with a planet hanging lazily in the dark outside a hatch. A red mesh enveloped real-world obstacles, allowing me to dodge coffee tables and loungers as I (all too briefly) explored the station. After a few more moments of stumbling, that was that — demo over. I was just a little crushed.

With any luck, Occipital gets the sort of support from developers it's been gunning for. The Bridge system isn't perfect for a whole host of reasons, like the iPhone's non-AMOLED display and the potentially big hit on the phone's battery, but even the unfinished demo software was almost enough to make me toss the Phab 2 Pro in a desk drawer. The right kind of love could turn the Bridge into a must-have down the road — for now, I'll just have to wait and hope.



Read the full article here by Engadget

Thursday, 8 December 2016

Cancer consumes fat to feed its metastatic march in mice

Stopping cancer in its tracks may be easier than we thought and possible without debilitating rounds of chemo and radiation therapies. All we have to do is take away its energy source. Researchers at the Institute for Research in Biomedicine at the Barcelona Institute of Science and Technology in Spain have discovered that a cancerous cell's metastasization -- the process through which the disease spreads throughout the body -- relies heavily on the presence of a single molecule which allows the cells to absorb lipid fats from their environment.

See, metastasizing is an extremely energy-intensive venture. Most cells are biologically programmed to self-destruct if they break free from their surrounding tissue, in fact. However, for a cancer cell to spread, not only does it have to survive the trip, it must immediately adapt to the new tissue, adjust its protein expression to the new environment and begin propagating before it's attacked by the immune system. In the December edition of the journal Nature, the Barcelona team reports that it discovered this process runs on fats. And to absorb that energy source, the cancer cell uses a molecule called CD36.

The team found that when antibodies blocked CD36, the cancer cells were unable to absorb lipids and therefore unable to metastasize. Unfortunately, it didn't do anything to slow the original tumor's growth, but being able to keep it from spreading is still a big deal. And, going back through the medical literature, the Barcelona team found a correlation between stronger CD36 expression in a patient and worse medical outcomes. The findings held true for bladder, lung and breast cancers in people as well.

The team is currently developing antibodies that will inhibit the expression of CD36, though they expect their work to take at least four more years before it will be ready for human trials. But their initial findings are promising. The prototype antibodies completely destroyed 15 percent of the metastasized cells they came in contact with and shrank the remaining tumors by at least 80 percent.

That said, if you have been diagnosed with cancer, don't do anything drastic like unilaterally switch to a low-fat diet. You're not a doctor and these guys haven't confirmed anything in humans yet. Always follow the direction of your oncologist.

Source: Nature



Read the full article here by Engadget

Microsoft and Qualcomm Collaborate to Bring Windows 10 & x86 Emulation to Snapdragon Processors

Today at Microsoft’s WinHEC event in Shenzhen, China, the company announced that it’s working with Qualcomm to bring the full Windows 10 experience to future devices powered by Snapdragon processors. Terry Myerson, executive vice president of the Windows and Devices Group at Microsoft, is “excited to bring Windows 10 to the ARM ecosystem” and looks forward to bringing “Windows 10 to life with a range of thin, light, power-efficient and always-connected devices,” which may include anything from smartphones to tablets to ultraportable laptops to servers. These new Snapdragon-powered devices should support all things Microsoft, including Microsoft Office, Windows Hello, Windows Pen, and the Edge browser, alongside third-party Universal Windows Platform (UWP) apps and, most interestingly, x86 (32-bit) Win32 apps. They should even be able to play Crysis 2.

This announcement fits nicely with Microsoft’s “Windows Everywhere” doctrine and should come as no surprise. It’s not even the first time we’ve seen Windows running on ARM processors. Microsoft’s failed Windows RT operating system was a modified version of Windows 8 that targeted the ARMv7-A 32-bit architecture. It grew from Microsoft’s MinWin effort to make Windows more modular by reorganizing the operating system and cleaning up API dependencies.

This work first surfaced in Windows Server 2008, which could be installed with a stripped-down, command-line only interface that did not include components such as Internet Explorer that were not necessary for specific server roles. Windows RT also leveraged the newer Windows Runtime (WinRT) API that offered several new features such as digitally signed app packages distributed through the centralized Windows Store and the ability to run apps within a sandbox. It also made it easier for software developers to target multiple CPU architectures. However, Microsoft’s rework of Windows was not yet complete, leaving Windows RT with a bunch of legacy Win32 code that went unused. It also could not run Win32 desktop apps, severely limiting the number of available apps to only those using WinRT and distributed through the Windows Store.

MinWin and its derivatives have continued to evolve over the past few years after getting a major boost in 2013 when Microsoft reorganized its disparate software platforms into the singular Operating Systems Engineering Group. The end result is Windows 10, a modular OS that can run on anything from low-powered IoT devices to high-performing workstations and servers. Its foundation is OneCore, MinWin’s direct descendant, that includes only the operating system kernel and components essential for any hardware platform. OneCore UAP (Universal App Platform) is another major module for Windows 10 whose groundwork was laid during the creation of Windows Phone and Windows RT. It provides support for Universal Windows Apps and Drivers, along with more advanced features such as the Edge browser and DirectX. On top of these modules, Microsoft can add modules that target specific device families (desktop, mobile, Xbox, HoloLens, etc.) that provide specialized features and shells.

Also included in OneCore UAP is the Universal Windows Platform (UWP). An extension of the WinRT API used in Windows 8, it allows developers to create universal apps that are CPU-architecture agnostic and can run on multiple devices, seamlessly adapting their user interface and input methods to the hardware they're running on. With UWP, architecture independence is achieved by having pre-compiled versions for each platform available from the Store, which will then download and install the correct version for the individual device. The major change with today's announcement over Windows RT and UWP is that x86 apps will be able to run on Qualcomm's ARM-based SoCs, along with support for all of the peripherals that are already supported with Windows 10. This alone is a huge change from Windows RT, which would only work with a small subset of peripherals.

Microsoft is also focusing on having these devices always connected through cellular, which is something that is not available for many PCs at the moment. Support will be available for eSIM, to avoid having to find room in a cramped design to accommodate a physical SIM, and Microsoft is going so far as to call these "cellular PCs," meaning it expects broad support for this class of computer, rather than the handful available now with cellular connectivity.

The ability to run x86 Win32 apps on ARM will come through emulation, and to demonstrate the performance Microsoft has released a video of an ARM PC running Photoshop.

This of course raises several questions, few if any of which Microsoft is willing to answer. Intel has long exerted strong control over the x86 ISA, limiting or outright preventing competitors like NVIDIA from implementing x86 support. So how Microsoft and Qualcomm are able to (for lack of a better way to put it) get away with this is a big question. Certainly there's no indication right now that this has Intel's formal blessing.

The key points here are that this is a form of software emulation - Microsoft even calls it as much - and that only 32-bit x86 support is being offered. On the former, this means that there's no hardware execution of x86 instructions taking place - though Microsoft and Qualcomm are certainly lining up instructions as best they can - which avoids many of the obvious patent pitfalls of doing x86 in hardware, and puts it in the same category as other x86 emulation mechanisms like DOSBox and QEMU. Meanwhile, only supporting 32-bit x86 code further rolls back the clock, as the most important of those instructions are by now quite old, x86 having made the jump to 64-bit x86-64 back in 2003. So it may very well be that it's easier to avoid any potential legal issues by sticking with 32-bit code, though that's supposition on our part. In any case it will be interesting to see what instructions Microsoft's emulator supports, and whether newer instructions and instruction set extensions (e.g. SSE2) are supported in some fashion.

Of course, the performance of this solution remains to be seen. x86 is not easy or cheap to emulate, and an "emulator" as opposed to a Denver-like instruction translation makes that all the harder. On the other hand, while maximizing x86 compatibility is great for Microsoft and Qualcomm, what they really need x86 for is legacy applications, which broadly speaking aren't performance-critical. So while x86 on a phone/tablet ARM SoC may not be fast, it need only be "good enough."
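To make the software-emulation point concrete, here is a toy interpreter in the spirit of what an emulator does: fetch, decode, and execute each guest instruction in host code. The invented x86-flavored ops below are nothing like a real emulator, but they show why each guest instruction costs many host operations:

```python
# Toy interpreter illustrating software emulation: every guest
# instruction is decoded and executed by host code, so one guest op
# costs many host ops. A real x86 emulator is vastly more complex.
def run(program, regs):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "mov":                  # mov reg, immediate
            regs[args[0]] = args[1]
        elif op == "add":                # add dst, src (register)
            regs[args[0]] += regs[args[1]]
        elif op == "dec":                # decrement register
            regs[args[0]] -= 1
        elif op == "jnz":                # jump to index if register != 0
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return regs

# Sum 5+4+3+2+1 the emulated way: each tuple stands in for one guest
# instruction that a native CPU would retire in a cycle or two.
prog = [
    ("mov", "eax", 0),
    ("mov", "ecx", 5),
    ("add", "eax", "ecx"),
    ("dec", "ecx"),
    ("jnz", "ecx", 2),
]
print(run(prog, {})["eax"])   # 15
```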

In any case, Windows 10’s ability to scale and adapt to essentially any hardware platform is a remarkable feat of engineering, and it’s what makes today’s joint announcement with Qualcomm possible. The first devices with Snapdragon SoCs running the full Windows 10 experience should be available in the second half of 2017.

It will be interesting to see what shape these devices take and which companies produce them. Some new lower-cost, full-featured Windows 10 tablets would be a welcome addition, and Qualcomm has its eyes on the low-powered server market too with its Centriq product family. A Windows 10 smartphone with a Snapdragon SoC is also likely, but with Windows Phone 8 holding less than 1% global market share, according to Gartner, Microsoft is essentially starting from scratch. Will the benefits of universal apps be enough to lure software developers and users of other Windows products away from Android and iOS? Can Windows 10 reestablish Microsoft as a major player in the smartphone market, or is the hole it has dug over the past decade too deep?



Read the full article here by AnandTech Article Channel

FindLectures Is a Huge Repository of Free Digital Lectures

The internet is filled with free educational lectures, and many of those lectures are spread across a variety of platforms, from free university sites to YouTube. FindLectures attempts to provide a single place to search through them.




Read the full article here by Lifehacker

Wednesday, 7 December 2016

Best Practices For Animated Progress Indicators

Pebble Is Dead, and Its Customers Are Completely Screwed

Pebble, one of the best-known smartwatch brands, is dead. Rumors about the company’s demise have been swirling since The Information reported a potential Fitbit buyout last week. Today in a post on Kickstarter, Pebble confirmed that Fitbit was acquiring “key Pebble assets” and that Pebble would “no longer be operating…




Read the full article here by Lifehacker

Open-sourcing DeepMind Lab

Originally posted on DeepMind Blog

DeepMind's scientific mission is to push the boundaries of AI, developing systems that can learn to solve any complex problem without needing to be taught how. To achieve this, we work from the premise that AI needs to be general. Agents should operate across a wide range of tasks and be able to automatically adapt to changing circumstances. That is, they should not be pre-programmed, but rather, able to learn automatically from their raw inputs and reward signals from the environment. There are two parts to this research program: (1) designing ever-more intelligent agents capable of more and more sophisticated cognitive skills, and (2) building increasingly complex environments where agents can be trained and evaluated.

The development of innovative agents goes hand in hand with the careful design and implementation of rationally selected, flexible and well-maintained environments. To that end, we at DeepMind have invested considerable effort toward building rich simulated environments to serve as  “laboratories” for AI research. Now we are open-sourcing our flagship platform,  DeepMind Lab, so the broader research community can make use of it.

DeepMind Lab is a fully 3D game-like platform tailored for agent-based AI research. It is observed from a first-person viewpoint, through the eyes of the simulated agent. Scenes are rendered with rich science fiction-style visuals. The available actions allow agents to look around and move in 3D. The agent’s “body” is a floating orb. It levitates and moves by activating thrusters opposite its desired direction of movement, and it has a camera that moves around the main sphere as a ball-in-socket joint tracking the rotational look actions. Example tasks include collecting fruit, navigating in mazes, traversing dangerous passages while avoiding falling off cliffs, bouncing through space using launch pads to move between platforms, playing laser tag, and quickly learning and remembering random procedurally generated environments. An illustration of how agents in DeepMind Lab perceive and interact with the world can be seen below:

At each moment in time, agents observe the world as an image, in pixels, rendered from their own first-person perspective. They also may receive a reward (or punishment!) signal. The agent can activate its thrusters to move in 3D and can also rotate its viewpoint along both horizontal and vertical axes.
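That observe-act-reward loop maps directly onto the platform's Python API. The sketch below follows the pattern of the repository's example code; the level and observation names are the ones shipped at release, but are worth verifying against the current repo:

```python
# Minimal agent loop against DeepMind Lab's Python bindings, following
# the repository's example code. Level and observation names below are
# those shipped at release; verify against the current repo.
import numpy as np
import deepmind_lab

env = deepmind_lab.Lab(
    "nav_maze_static_01",          # one of the bundled navigation levels
    ["RGB_INTERLACED"],            # first-person pixels, H x W x 3 uint8
    config={"width": "84", "height": "84"},
)
env.reset()

total_reward = 0.0
for _ in range(1000):
    if not env.is_running():
        env.reset()                # episode ended; start a new one
    # 7-dim action vector: look yaw/pitch, strafe, forward/back,
    # fire, jump, crouch (see env.action_spec() for exact ordering).
    action = np.zeros((7,), dtype=np.intc)
    action[3] = 1                  # move forward
    total_reward += env.step(action, num_steps=4)  # repeat action 4 frames
    if env.is_running():
        pixels = env.observations()["RGB_INTERLACED"]

print("total reward:", total_reward)
```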


Artificial general intelligence research in DeepMind Lab emphasizes navigation, memory, 3D vision from a first-person viewpoint, motor control, planning, strategy, time, and fully autonomous agents that must learn for themselves what tasks to perform by exploring their environment. All these factors make learning difficult. Each is considered a frontier research question in its own right. Putting them all together in one platform, as we have, represents a significant new challenge for the field.


DeepMind Lab is highly customisable and extendable. New levels can be authored with off-the-shelf editor tools. In addition, DeepMind Lab includes an interface for programmatic level-creation. Levels can be customised with gameplay logic, item pickups, custom observations, level restarts, reward schemes, in-game messages and more. The interface can be used to create levels in which novel map layouts are generated on the fly while an agent trains. These features are useful in, for example, testing how an agent copes with unfamiliar environments. Users will be able to add custom levels to the platform via GitHub. The assets will be hosted on GitHub alongside all the code, maps and level scripts. Our hope is that the community will help us shape and develop the platform going forward.



DeepMind Lab has been used internally at DeepMind for some time (example). We believe it has already had a significant impact on our thinking concerning numerous aspects of intelligence, both natural and artificial. However, our efforts so far have only barely scratched the surface of what is possible in DeepMind Lab. There are opportunities for significant contributions still to be made in a number of mostly still untouched research domains now available through DeepMind Lab, such as navigation, memory and exploration.

As well as facilitating agent evaluation, there are compelling reasons to think that it may be fundamentally easier to develop intelligence in a 3D world, observed from a first-person viewpoint, like DeepMind Lab. After all, the only known examples of general-purpose intelligence in the natural world arose from a combination of evolution, development, and learning, grounded in physics and the sensory apparatus of animals. It is possible that a large fraction of animal and human intelligence is a direct consequence of the richness of our environment, and unlikely to arise without it. Consider the alternative: if you or I had grown up in a world that looked like Space Invaders or Pac-Man, it doesn’t seem likely we would have achieved much general intelligence!

Read the full paper here.

Access DeepMind's GitHub repository here.

By Charlie Beattie, Joel Leibo, Stig Petersen and Shane Legg, DeepMind Team




Read the full article here by Google Open Source Blog

Sunday, 4 December 2016

Google has developed Deep Learning Algorithm to detect Diabetic Eye Disease

New advances in deep learning are helping to solve many important medical and healthcare problems.

Read the full article here by DailyTech Main News Feed

VM-Neutral Node.js API Unveiled, As NodeSource Collaborates With Microsoft, Mozilla, Intel and IBM

An anonymous reader writes: This week saw the first proof of concept for Node.js API (or NAPI for short), "making module maintainers' lives easier by defining a stable module API that is independent from changes in [Google's JavaScript engine] V8 and allowing modules to run against newer versions of Node.js without recompilation." Their announcement cites both the efforts of the Node.js API working group and of ChakraCore, the core part of the Chakra Javascript engine that powers Microsoft Edge. And there was also a second announcement -- that the Node.js build system "will start producing nightly node-chakracore builds, enabling Node.js to be used with the ChakraCore JavaScript engine. "These initial efforts are stepping stones to make Node.js VM-neutral, which would allow more opportunities for Node.js in IoT and mobile use cases as well as a variety of different systems." One IBM runtime developer called it "a concrete step toward the strategic end goal of VM neutrality," and the Node.js Foundation believes that the API will ultimately result in "more modules to choose from, and more stability with modules without the need to continually upgrade."



Read the full article here by Slashdot

Saturday, 3 December 2016

Amazon's new Snowmobile collects data instead of blasting through powder

Moving large amounts of electronically stored data is both difficult and time consuming. Current technology limits how quickly data can be transferred over the internet. While a few pictures on your phone can be uploaded in seconds, backing up all the data for an entire corporation can take years. This is a problem Amazon Web Services, the cloud-based computing arm of the online retailer, is currently dealing with. They now have a solution: the Snowmobile, literally a massive mobile server.
The Snowmobile is an expansion of Amazon's Snowball service. Amazon ships an 80-terabyte Snowball storage unit to customers; once they have uploaded their data, customers ship the Snowball back to Amazon so the data can be transferred directly to a server. Amazon designed the Snowball to deal with the high network costs, long transfer times, and security concerns that come with moving large amounts of data over the internet.

For some customers, the Snowball simply didn't provide enough space. This led to the development of the Snowmobile, a semi-truck with 100 petabytes (or 100 million gigabytes) of storage that Amazon will use to directly transfer and move digital data from a customer to one of its data storage facilities. Essentially, Amazon sends customers a truck-sized hard drive that physically moves all of their stored data. Although it may sound strange to use a truck to move digital data, moving 100 petabytes over fiber internet could take more than 20 years. According to Wired, a Snowmobile can do a full New York to Los Angeles trip in 10 days, including the data transfer time.

Pricing is based on how much data is stored on the Snowmobile per month. Amazon is charging $0.005 per gigabyte, which is relatively inexpensive. Do the math and you'll find that a fully-loaded truck will set you back $500,000. Amazon obviously believes there will be enough demand to invest in the Snowmobile.
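Both headline numbers check out with simple arithmetic. The 1 Gbps link speed below is our assumption; the article only says "fiber":

```python
# Working out the article's two numbers. The 1 Gbps link is an
# assumed figure; the article just says "fiber internet".
PB_IN_GB = 1_000_000                 # 1 petabyte = 1e6 gigabytes (decimal)
capacity_gb = 100 * PB_IN_GB         # one Snowmobile: 100 PB

# Cost at $0.005 per gigabyte:
print(f"full truck: ${capacity_gb * 0.005:,.0f}")          # $500,000

# Time to push 100 PB over an assumed 1 Gbps fiber link:
seconds = capacity_gb * 1e9 * 8 / 1e9                      # bits / bps
print(f"at 1 Gbps: {seconds / (86400 * 365):.1f} years")   # ~25 years
```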

Amazon Web Services has storage facilities worldwide, with several scattered throughout the United States. This means the turnaround time for a Snowmobile can be days, not years. For Amazon's part, they're taking Snowmobile security and encryption very seriously. The company will provide dedicated security personnel, GPS tracking, alarm monitoring, and 24/7 video surveillance. Customers have the option of a security escort vehicle to accompany the Snowmobile while in transit. The truck itself will be tamper-resistant, water-resistant and temperature controlled.


Via: Wired

Source: Amazon



Read the full article here by Engadget

Death Stranding: Playing Tricks With Players

Hideo Kojima had a very busy Game Awards 2016 ceremony. He received an award as a game development icon and also shared the next trailer for his mysterious game Death Stranding. As with the debut video, you can't really figure out what's going on, but we do see some interesting stuff. But first, have a look for yourself. My god, this is something.

The opening shot shows a bunch of dead crabs, all female and swollen with eggs. It's reasonably clear that we are standing somewhere near the shoreline. However, as the shot progresses, we see a man walk right out of the imaginary sea. Before him, we see a dark tunnel and a ruined city.


Guillermo del Toro is a good friend of Hideo Kojima.

The man turns around to watch the planes flying above him, and we see that it's Guillermo del Toro – director of some of the spookiest feature films. Attentive users on Reddit noticed that he actually has a scar on his forehead; it seems like someone was doing something to his brain. He's wearing a suit and an unrecognizable badge bearing the name Bridges and a web spreading across the US (it actually says United Cities of America).


Del Toro is actually not mentioned in the trailer's credits; Norman Reedus is listed instead. This has led some fans to believe that the trailer takes place in an alternative universe where del Toro is Reedus. Whooooah.


The man hides inside the dark passage under the bridge, and then some strange stuff starts to happen. We see a huge tank (an M36 tank destroyer, an American vehicle used in 1944 and 1945), seemingly infested with some strange organic substance. The tank looks like it was made by zombies: there are pieces of gore on its armor plates and bones that serve as ladders.


Then we see zombies who appear to have been US Marines. At least that's my theory. They are marching somewhere, while the black goo pours down through the cracks of the bridge. Del Toro is obviously scared.


The water is rising, and he suddenly plugs a hose into the small container he's carrying in his hands. Inside the container we see a child. Then we see Mads Mikkelsen's character.


Mads Mikkelsen is like the perfect bad guy.


Design-wise, it's absolute genius. You can't figure out what's going on, but there are already tons of comments on Reddit, and it's really interesting to check out all those theories. What's more interesting to us is that Death Stranding's technical producer is Mark Cerny, who served as the architect of the PlayStation 4 and the director of Knack (although we do wish to forget about that game). This means that Kojima has access to some of the most powerful technologies at Sony Computer Entertainment.


Yesterday Michael Förtsch (a German journalist who works with Zeit, Spiegel and other reputable papers) claimed that he knows the name of Death Stranding's engine. It's supposedly called Decima! The engine itself is a modified version of the technology used to build Infamous Second Son by Sucker Punch. Second Son is one of the most beautiful games on the platform, so we're pretty sure that Kojima's next game is going to have a lot of open spaces, urban landscapes, amazing action and nanomachines. This is just our forecast. After all, just like Jon Snow, we know absolutely nothing.


Death Stranding will be available for PS4 and PC.

 





Read the full article here by 80lvl