Friday, 6 October 2017

Reasons Kubernetes is cool

When I first learned about Kubernetes (a year and a half ago?) I really didn’t understand why I should care about it.

I’ve been working full time with Kubernetes for 3 months or so and now have some thoughts about why I think it’s useful. (I’m still very far from being a Kubernetes expert!) Hopefully this will help a little in your journey to understand what even is going on with Kubernetes!

I will try to explain some reasons I think Kubernetes is interesting without using the words “cloud native”, “orchestration”, “container”, or any Kubernetes-specific terminology :). I’m going to explain this mostly from the perspective of a kubernetes operator / infrastructure engineer, since my job right now is to set up Kubernetes and make it work well.

I’m not going to try to address the question of “should you use kubernetes for your production systems?” at all; that is a very complicated question (not least because “in production” has totally different requirements depending on what you’re doing).

Kubernetes lets you run code in production without setting up new servers

The first pitch I got for Kubernetes was a conversation with my partner Kamal. Here’s an approximate transcript:

  • Kamal: With Kubernetes you can set up a new service with a single command
  • Julia: I don’t understand how that’s possible.
  • Kamal: Like, you just write 1 configuration file, apply it, and then you have a HTTP service running in production
  • Julia: But today I need to create new AWS instances, write a puppet manifest, set up service discovery, configure my load balancers, configure our deployment software, and make sure DNS is working, it takes at least 4 hours if nothing goes wrong.
  • Kamal: Yeah. With Kubernetes you don’t have to do any of that, you can set up a new HTTP service in 5 minutes and it’ll just automatically run. As long as you have spare capacity in your cluster it just works!
  • Julia: There must be a trap

There kind of is a trap: setting up a production Kubernetes cluster is (in my experience) definitely not easy (see Kubernetes The Hard Way for what’s involved in getting started). But we’re not going to go into that right now!

So the first cool thing about Kubernetes is that it has the potential to make life way easier for developers who want to deploy new software into production. That’s cool, and it’s actually true, once you have a working Kubernetes cluster you really can set up a production HTTP service (“run 5 of this application, set up a load balancer, give it this DNS name, done”) with just one configuration file. It’s really fun to see.
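To make that concrete, the one configuration file might look something like this. Every name, the image, and the port are made up for illustration, and the API versions are from the Kubernetes 1.7/1.8 era this post was written in:

```yaml
# Hypothetical example: "run 5 of this application, set up a load balancer,
# give it this DNS name" in one file, applied with `kubectl apply -f hello.yaml`.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 5                          # run 5 of this application
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: example.com/hello:1.0   # made-up image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service                          # the load balancer + DNS name
metadata:
  name: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
```

Inside the cluster the Service name doubles as the DNS name (`hello.default.svc.cluster.local`), and `type: LoadBalancer` asks for an external load balancer where the environment supports one.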

Kubernetes gives you easy visibility & control of what code you have running in production

IMO you can’t understand Kubernetes without understanding etcd. So let’s talk about etcd!

Imagine that I asked you today “hey, tell me every application you have running in production, what host it’s running on, whether it’s healthy or not, and whether or not it has a DNS name attached to it”. I don’t know about you but I would need to go look in a bunch of different places to answer this question and it would take me quite a while to figure out. I definitely can’t query just one API.

In Kubernetes, all the state in your cluster – applications running (“pods”), nodes, DNS names, cron jobs, and more – is stored in a single database (etcd). Every Kubernetes component is stateless, and basically works by

  • Reading state from etcd (eg “the list of pods assigned to node 1”)
  • Making changes (eg “actually start running pod A on node 1”)
  • Updating the state in etcd (eg “set the state of pod A to ‘running’”)

This means that if you want to answer a question like “hey, how many nginx pods do I have running right now in that availability zone?” you can answer it by querying a single unified API (the Kubernetes API!). And you have exactly the same access to that API that every other Kubernetes component does.
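For illustration, here’s the shape of that kind of query, sketched against a plain in-memory list standing in for the API response (real pod objects are much richer; these field names are simplified):

```python
# Simplified stand-in for what the Kubernetes API would return.
pods = [
    {"name": "nginx-1", "app": "nginx", "zone": "us-east-1a", "phase": "Running"},
    {"name": "nginx-2", "app": "nginx", "zone": "us-east-1b", "phase": "Running"},
    {"name": "nginx-3", "app": "nginx", "zone": "us-east-1a", "phase": "Pending"},
    {"name": "redis-1", "app": "redis", "zone": "us-east-1a", "phase": "Running"},
]

def count_running(pods, app, zone):
    """How many pods of this app are running in this zone right now?"""
    return sum(
        1 for p in pods
        if p["app"] == app and p["zone"] == zone and p["phase"] == "Running"
    )

print(count_running(pods, "nginx", "us-east-1a"))  # 1 (nginx-3 is still Pending)
```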

This also means that you have easy control of everything running in Kubernetes. If you want to, say,

  • Implement a complicated custom rollout strategy for deployments (deploy 1 thing, wait 2 minutes, deploy 5 more, wait 3.7 minutes, etc)
  • Automatically start a new webserver every time a branch is pushed to github
  • Monitor all your running applications to make sure all of them have a reasonable cgroups memory limit

all you need to do is to write a program that talks to the Kubernetes API. (a “controller”)
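The controller pattern itself is small enough to sketch. This toy reconcile loop runs against a plain dict standing in for etcd; none of this is the real client machinery, it just shows the read-state / make-changes / write-state-back cycle:

```python
# Toy "etcd": desired state vs. observed state, keyed by pod name.
desired = {"web-1": "running", "web-2": "running"}
observed = {"web-1": "running"}

def reconcile(desired, observed):
    """One pass of a controller loop: read state, act, write state back."""
    actions = []
    for name, want in desired.items():
        if observed.get(name) != want:
            actions.append(f"start {name}")  # "actually start running pod A"
            observed[name] = want            # "update the state in etcd"
    for name in list(observed):
        if name not in desired:
            actions.append(f"stop {name}")   # garbage-collect what shouldn't exist
            del observed[name]
    return actions

print(reconcile(desired, observed))  # ['start web-2']
```

A real controller would watch the API for changes instead of polling a dict, but the logic is this shape.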

Another very exciting thing about the Kubernetes API is that you’re not limited to just functionality that Kubernetes provides! If you decide that you have your own opinions about how your software should be deployed / created / monitored, then you can write code that uses the Kubernetes API to do it! It lets you do everything you need.

If every Kubernetes component dies, your code will still keep running

One thing I was originally promised (by various blog posts :)) about Kubernetes was “hey, if the Kubernetes apiserver and everything else dies, it’s ok, your code will just keep running”. I thought this sounded cool in theory but I wasn’t sure if it was actually true.

So far it seems to be actually true!

I’ve been through some etcd outages now, and what happens is

  1. All the code that was running keeps running
  2. Nothing new happens (you can’t deploy new code or make changes, cron jobs will stop working)
  3. When everything comes back, the cluster will catch up on whatever it missed

This does mean that if etcd goes down and one of your applications crashes or something, it can’t come back up until etcd returns.

Kubernetes’ design is pretty resilient to bugs

Like any piece of software, Kubernetes has bugs. For example right now in our cluster the controller manager has a memory leak, and the scheduler crashes pretty regularly. Bugs obviously aren’t good but so far I’ve found that Kubernetes’ design helps mitigate a lot of the bugs in its core components really well.

If you restart any component, what happens is:

  • It reads all its relevant state from etcd
  • It starts doing the necessary things it’s supposed to be doing based on that state (scheduling pods, garbage collecting completed pods, scheduling cronjobs, deploying daemonsets, whatever)

Because the components don’t keep any state in memory, you can just restart them at any time, and that can help mitigate a variety of bugs.

For example! Let’s say you have a memory leak in your controller manager. Because the controller manager is stateless, you can just periodically restart it every hour or something and feel confident that you won’t cause any consistency issues. Or we ran into a bug in the scheduler where it would sometimes just forget about pods and never schedule them. You can sort of mitigate this just by restarting the scheduler every 10 minutes. (we didn’t do that, we fixed the bug instead, but you could :) )

So I feel like I can trust Kubernetes’ design to help make sure the state in the cluster stays consistent even when there are bugs in its core components. And in general, the software is improving over time.

The only stateful thing you have to operate is etcd

Not to harp on this “state” thing too much but – I think it’s cool that in Kubernetes the only thing you have to come up with backup/restore plans for is etcd (unless you use persistent volumes for your pods). I think it makes kubernetes operations a lot easier to think about.

Implementing new distributed systems on top of Kubernetes is relatively easy

Suppose you want to implement a distributed cron job scheduling system! Doing that from scratch is a ton of work. But implementing a distributed cron job scheduling system inside Kubernetes is much easier! (still not trivial, it’s still a distributed system)

The first time I read the code for the Kubernetes cronjob controller I was really delighted by how simple it was. Here, go read it! The main logic is like 400 lines of Go: cronjob_controller.go.

Basically what the cronjob controller does is:

  • Every 10 seconds:
    • Lists all the cronjobs that exist
    • Checks if any of them need to run right now
    • If so, creates a new Job object to be scheduled & actually run by other Kubernetes controllers
    • Cleans up finished jobs
    • Repeat
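That loop is simple enough to sketch in miniature. This toy version uses a fixed “every N minutes” interval instead of real cron expressions, and a plain list instead of real Job objects:

```python
import datetime

# Toy cronjob records: how often each should run, and when it last ran.
cronjobs = {
    "backup":  {"every_min": 10, "last_run": datetime.datetime(2017, 10, 6, 12, 0)},
    "cleanup": {"every_min": 60, "last_run": datetime.datetime(2017, 10, 6, 12, 30)},
}
jobs = []  # Job objects we "create" for other controllers to schedule & run

def tick(now):
    """One pass of the (very simplified) cronjob controller loop."""
    for name, cj in cronjobs.items():
        due = cj["last_run"] + datetime.timedelta(minutes=cj["every_min"])
        if now >= due:
            jobs.append({"cronjob": name, "scheduled_at": now})  # create a Job
            cj["last_run"] = now

tick(datetime.datetime(2017, 10, 6, 12, 15))
print([j["cronjob"] for j in jobs])  # ['backup']
```

The real controller also has to worry about missed runs, concurrency policies and cleanup, which is where most of those 400 lines go.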

The Kubernetes model is pretty constrained (it has this pattern: resources are defined in etcd, and controllers read those resources and update etcd), and I think having this relatively opinionated/constrained model makes it easier to develop your own distributed systems inside the Kubernetes framework.

Kamal introduced me to this idea of “Kubernetes is a good platform for writing your own distributed systems” instead of just “Kubernetes is a distributed system you can use” and I think it’s really interesting. He has a prototype of a system to run an HTTP service for every branch you push to github. It took him a weekend and is like 800 lines of Go, which I thought was impressive!

Kubernetes lets you do some amazing things (but isn’t easy)

I started out by saying “kubernetes lets you do these magical things, you can just spin up so much infrastructure with a single configuration file, it’s amazing”. And that’s true!

What I mean by “Kubernetes isn’t easy” is that Kubernetes has a lot of moving parts, and learning how to successfully operate a highly available Kubernetes cluster is a lot of work. Like, I find that with a lot of the abstractions it gives me, I need to understand what is underneath those abstractions in order to debug issues and configure things properly. I love learning new things so this doesn’t make me angry or anything, I just think it’s important to know :)

One specific example of “I can’t just rely on the abstractions” that I’ve struggled with is that I needed to learn a LOT about how networking works on Linux to feel confident with setting up Kubernetes networking, way more than I’d ever had to learn about networking before. This was very fun but pretty time consuming. I might write more about what is hard/interesting about setting up Kubernetes networking at some point.

Or I wrote a 2000 word blog post about everything I had to learn about Kubernetes’ different options for certificate authorities to be able to set up my Kubernetes CAs successfully.

I think some of these managed Kubernetes systems like GKE (google’s kubernetes product) may be simpler since they make a lot of decisions for you but I haven’t tried any of them.



Read the full article here by Julia Evans

How to manage Linux containers with Ansible Container


Oct 05, 2017, 10:00 (Other stories by Opensource.com)


Ansible Container addresses Dockerfile shortcomings and offers complete management for containerized projects.




Read the full article here by Linux Today

Microsoft Edge coming to Android and iOS

Microsoft today addresses one of the biggest shortcomings of its Edge browser, announcing its upcoming availability on the two biggest mobile platforms: Android and iOS. Microsoft says that support for Android and iOS is "one of the most common requests" that it has received from Edge users on Windows 10. By making it available across multiple platforms, the software giant is making its browser a viable option in this space for folks who use a mix of devices. Edge is also available on Windows 10 Mobile, but the vast majority of mobile users are on Android and iOS. What's more, its biggest rivals… [Continue Reading]


Read the full article here by Betanews

Thursday, 5 October 2017

Google's instant-translation Pixel Buds are its most incredible release yet

There were numerous exciting releases announced by Google yesterday, but perhaps the most exciting -- certainly the most impressive -- was the Pixel Buds. Forget Apple's AirPods, Google's earphones have a mind-blowing trick up their sleeve. Designed (ideally) for use with the Pixel 2, the Google Pixel Buds can translate spoken words on the fly. Real-time translations -- the Babel fish made real and implanted in your ear -- is now a reality, allowing for conversations between two people speaking different languages. Google Pixel Buds are an extension of Google Translate, and can translate between 40 languages in an instant.… [Continue Reading]


Read the full article here by Betanews

Google's Pixel Buds translation will change the world

Google's Pixel 2 event in San Francisco on Wednesday had a lot of stuff to show off and most of it was more of the same: the next iteration of the flagship smartphone, new Home speakers and various ways of entwining them more deeply into your smart home, a new laptop that's basically a Yoga running ChromeOS and a body camera that I'm sure we've seen somewhere before. Yawn. We saw stuff like this last time and are sure to see more of it again at next year's event.

But tucked into the tail end of the presentation, Google quietly revealed that it had changed the world with a pair of wireless headphones. Not to be outdone by Apple's Air Pods and their wirelessly-charging TicTac storage case, Google packed its headphones with the power to translate between 40 languages, literally in real-time. The company has finally done what science fiction and countless Kickstarters have been promising us, but failing to deliver on, for years. This technology could fundamentally change how we communicate across the global community.

The Google Pixel Buds are wireless headphones designed for use with the company's new Pixel 2 handset. Once you've paired the phones to the handset, you can simply tap the right earpiece and issue a command to Google Assistant on the Pixel 2. You can have it play music, give you directions, place a phone call and whatnot, you know, all the standards.

But if you tell it to "Help me speak Japanese" and then start speaking in English, the phone's speakers will output your translated words as you speak them. The other party's reply (presumably in Japanese because otherwise what exactly are you playing at?) will then play into your ear through the Pixel Buds. As Google's onstage demonstration illustrated, there appeared to be virtually zero lag time during the translation, though we'll have to see how well that performance holds up in the real world with wonky WiFi connections, background noise and crosstalk.

This is a momentous breakthrough, to say the least. Just 20 years ago, if you wanted to have a passage of text translated using the internet rather than tracking down someone that actually spoke the language, you likely did it through Altavista's Babel Fish. Launched in 1997, it supported a dozen languages but often returned translations that were barely more intelligible than the text you put in. Over the next couple of decades, translation technology steadily improved but could never compete with natural language speakers for accuracy or speed.

In the last couple of years, we've seen some of the biggest names in technology jump into the translation space. In 2015 Skype debuted its Live Translation feature, which works with four languages for spoken audio and 50 languages over IM. However, the translations weren't really real-time: there was a lag between when the original message was sent and when the translated version arrived.

Earlier this year, Microsoft debuted its PowerPoint "Presentation Translator" add-in. Using an iOS or Android app, Presentation Translator can convert your voice over into Spanish or Chinese in real-time. It will not, however, make your PowerPoint presentation any less of an ordeal to sit through, so keep those slides to a minimum.

Both of those programs are impressive in their own rights; however, they're a far cry from the hardware that Google has developed. Cramming all of the bits and pieces necessary to facilitate real-time language translation into a device small enough to fit into your ear -- especially without the need for external computing power -- is no easy feat. That's not to say that people haven't tried (looking at you, Bragi Dash Pros).

The Pilot - Image: Waverly Labs

Take last year's Indiegogo project darling, the Pilot from Waverly Labs. Reportedly leveraging "speech recognition, machine translation and the advances of wearable technology" these paired devices would be split between the people conversing and inserted into the ear. When one person speaks, the other earpiece automatically translates those words. Or at least that's how it's supposed to work. The crowdfunding campaign closed last year and deliveries have yet to begin, though the company states that it will begin shipping units in Fall 2017.

But there's no need to do that now. Google didn't just beat Waverly Labs to the punch, Google knocked them down with 25 additional languages (40 to the Pilot's 15) and then stole their lunch money with a $160 pricetag -- $140 less than what Waverly wants for the Pilot.

But this isn't just about an industry titan curbstomping its startup competition, this technological advancement can, and likely will, have far reaching implications for the global community. It's as close as we can get to a Douglas Adams-esque Babel Fish without having to genetically engineer one ourselves. With these devices in circulation, the barriers of communications simply fall away.

You'll be able to walk up to nearly anybody in another country and be able to hold a fluid, natural language conversation without the need for pantomime and large hand gestures, or worry of offending with a mispronunciation. International commerce and communication could become as mundane as making a local phone call. The frictions of international diplomacy could be smoothed as well, ensuring that not only are a diplomat's words faithfully translated but that a copy of the conversation is recorded as well.

Granted, this isn't some magic bullet that will single handedly bring about world peace and harmony among all peoples. You'll still have plenty of nonverbal and culturally insensitive means of putting your foot in your mouth but until we make like the Empire and develop Galactic Standard, Google's Pixel Buds are our new best bet for understanding one another.

Follow all the latest news from Google's Pixel 2 event here!



Read the full article here by Engadget

Wednesday, 4 October 2017

When is exclusion a valid design choice?

Tidlo wooden walking-aid cart for €26.99 @ Amazon

€26.99
€62.00 (-57%)
Free shipping
Walking-aid and activity cart, wooden toy

This cart has a double use: it can help baby take their first steps, or it turns into an educational wooden game with several activities. Abacus, rattle, bead maze, blocks... it's all there.
A storage bin lets them wheel a teddy and doll around the whole house.

A wooden toy, for ages 12 months and up.

Dimensions: 34 cm x 33 cm x 50 cm high.


Read the full article here by Dealabs.com : Tous les deals HOT

Tuesday, 3 October 2017

NVIDIA Open-Sources "NVDLA", Open-Source Hardware Project

NVIDIA has announced a new open-source project: NVDLA...

Read the full article here by Phoronix

PlayStation boss Andrew House will leave Sony at the end of 2017

In 2011 Andrew House took over a troubled PlayStation brand from Kaz Hirai and has since restored its place as a leader in the gaming industry. Now, Sony has announced he's stepping down from his post as president and CEO of Sony Interactive Entertainment. SIE came to be in early 2016, spinning off everything PlayStation and Sony Network as a subsidiary of Sony based in California. According to a press release, he will be replaced by current deputy president John (Tsuyoshi) Kodera effective immediately but will remain in a new position as chairman through the end of the year to smooth the transition.

These days the PlayStation 4 family is on top of the sales charts, sports a game library stocked with desirable exclusives and is reaching into new technology like VR. But when House originally took the reins, he had a tough job ahead of him coming off of a disappointing run with the PS3 that included "$599 US dollars" and a hacking incident that took down PlayStation Network for more than a month. House has been with PlayStation since the beginning, working in Sony's marketing and communications department on the launch of the original console, eventually running the PlayStation business in Europe as of 2009.


In a statement, the outgoing exec said: "PlayStation has been a huge part of my life for more than 20 years but with the business having achieved record-breaking success, now seemed to be the right time for me to pursue new challenges." A Sony spokeswoman told the Wall Street Journal that House "plans to spend more time with his family before embarking on a new career in the entertainment industry."

His now-former boss Hirai thanked him and said:

"When I passed the baton of leadership for Sony Computer Entertainment to Andrew House in 2011, I was confident that I was leaving the PlayStation business in the best possible hands, and so it has proved. I'm extremely grateful to Andy for the great contribution he has made to evolving the PlayStation business, and firmly positioning it as one of the drivers of our future growth."

The new CEO Kodera has been with Sony since 1992, and worked on services for the PlayStation Network since 2010, overseeing the launch of PlayStation Vue, PlayStation Now and PS VR. He said that "It's a great honor to take on the role of President and CEO, SIE, and I'm very grateful to Kaz and Andy for the trust and faith they have shown in me. I intend to build on the amazing progress Andy has made enhancing the PlayStation brand and expanding the game and network services business, and will strive to further strengthen the unique value proposition we are able to offer via the PlayStation platform. I look forward to working together with the SIE team, and everyone across the Sony Group, to continue to strengthen relations with our business partners, and provide the best possible interactive entertainment experiences, that only PlayStation can deliver."

Source: Sony



Read the full article here by Engadget

Friday, 29 September 2017

Germany is the biggest source of botnets in Europe

Just a year after the Mirai malware infected connected devices to create the first IoT botnet, new research from Norton shows that global botnets have continued to grow and spread as a result of unaware users inadvertently infecting others. According to Norton, the top three countries responsible for hosting the highest number of bots in Europe are Germany at just over eight percent, Italy at 10 percent and Russia at almost 14 percent. The UK was Europe's 11th highest source of bot infections -- which is down from 2015 when it was ranked seventh. In terms of specific cities, Madrid in Spain… [Continue Reading]


Read the full article here by Betanews

Wednesday, 27 September 2017

Kibana 5 internals - Dashboards, Visualizations and Index Patterns

Kibana is one of the Elastic products, part of the Elastic stack (formerly known as ELK).

Kibana is an open source web app written with Angular and running over a thin Node.js server (which acts as a webserver).

Kibana is able to connect to a single Elasticsearch node.
It's possible to put a load balancer in front of several nodes, but best practice suggests a Coordinating node, a Tribe node (deprecated), or a Cross-Cluster node (in case you need it).

Keep in mind: Kibana has no actual reason to exist without an Elasticsearch cluster to connect to.
The opposite is not true: Elasticsearch can live on its own and be quite useful without Kibana and can be even connected to Grafana if you really need visualizations.

Kibana has very little local state/persistence: the configuration files and some cache files. It completely relies on Elasticsearch for most of the features (e.g. authentication if you own X-Pack) and for storing all the data required to run.

Kibana provides a cool interface to build dashboards.

Kibana 5.0 dashboard screenshot from Elastic website
The dashboards are made up of several visualizations.

A visualization can be something static (a Markdown field to show some comment or help message) or dynamic (a pie chart, a table or a histogram).

The dynamic visualizations get generated from the data retrieved from Elasticsearch.
The query to Elasticsearch is normally generated behind the scenes by the UI. It's even possible to write your own query and use the output in the visualization.
The queries targeting Elasticsearch are most of the time aggregations.

A visualization typically refers to data fields exposed by an index pattern.

What is an index pattern? Let's talk about index templates first.

If you are familiar with the Elastic stack, you've probably heard about index templates.
Index templates are one of the key parts of the Elasticsearch configuration.
An index template tells an Elasticsearch cluster how the data you send into an index (or into indices matching an index name pattern):

  • should be analyzed
    • if a text must be tokenized for free text search or taken as it is
    • if a float should be maybe mapped into a scaled float to save some space
    • if a field shouldn't be indexed at all
    • ...
  • should be stored and distributed
    • number of shards
    • number of replicas
    • how often the index gets refreshed
    • if you want to keep the original document or not
    • ...
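To make those two groups of settings concrete, here is roughly what the body of a `PUT _template/logs` request could look like on Elasticsearch 5.x (the template name, index pattern, and fields are all invented for illustration):

```json
{
  "template": "logs-*",
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1,
    "refresh_interval": "30s"
  },
  "mappings": {
    "log": {
      "properties": {
        "message":  { "type": "text" },
        "duration": { "type": "scaled_float", "scaling_factor": 100 },
        "trace_id": { "type": "keyword", "index": false }
      }
    }
  }
}
```

Here `message` gets tokenized for free text search, `duration` is mapped to a scaled float to save space, and `trace_id` is kept in the document but not indexed; the `settings` block covers the storage/distribution side. (The mapping type, `log` here, is one of the things going away in 6.0.)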

An index template is not able, on its own, to enumerate all the fields present in an index.
The only way to get them would be to perform a GET on the index, but that doesn't define how Kibana should consider them.

Index patterns to the rescue!

An index pattern tells Kibana the fields you can query and what is their type.
Other advantages are:

  • it's able to target multiple indices (but you could do that with aliases on Elasticsearch)
  • you can define scripted fields

It's important to trigger a refresh of the index pattern if fields in the targeted index (or indices) have changed (added, removed or change of type occurred).

What happens if you want to export a single Dashboard?

You have several options:

  • the Elasticsearch API allows you to back up all Kibana state, typically stored in the .kibana index
    • All dashboards, visualizations, index patterns and the dynamic configuration will be saved
  • Play with the new experimental Kibana import/export API (available since 5.5.0; it will be ready for prime time in 6.0)
  • Write a script in your preferred language

The latter option implies your tool:
  • gets the dashboard from the .kibana index, dashboard type by title
  • gets the panelsJSON field
  • unmarshalls the json data
  • gets all the visualization ids
  • exports all the visualization ids getting them from the .kibana index, visualization type
  • on each visualization
    • gets the kibanaSavedObjectMeta field
    • unmarshalls the json value
    • gets the index field within the query
  • get the index pattern from the .kibana index, index-pattern type
    • scripted fields are stored within an index-pattern
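Here's a sketch of that walk in Python, with hypothetical inline documents standing in for what you'd actually fetch from the .kibana index (the real documents have more fields, and the layout isn't a stable public API):

```python
import json

# Hypothetical sample documents, simplified from the .kibana layout.
dashboard = {
    "title": "traffic",
    "panelsJSON": json.dumps([{"id": "reqs-per-sec", "type": "visualization"}]),
}
visualizations = {
    "reqs-per-sec": {
        "kibanaSavedObjectMeta": {
            "searchSourceJSON": json.dumps({"index": "logstash-*"}),
        },
    },
}

def export_dashboard(dashboard, visualizations):
    """Walk a dashboard down to the index patterns it depends on."""
    panels = json.loads(dashboard["panelsJSON"])       # unmarshal panelsJSON
    vis_ids = [p["id"] for p in panels]                # all visualization ids
    index_patterns = set()
    for vis_id in vis_ids:
        meta = visualizations[vis_id]["kibanaSavedObjectMeta"]
        source = json.loads(meta["searchSourceJSON"])  # unmarshal the query
        index_patterns.add(source["index"])            # the index field
    return vis_ids, sorted(index_patterns)

print(export_dashboard(dashboard, visualizations))
# (['reqs-per-sec'], ['logstash-*'])
```

A real script would replace the inline dicts with searches against the .kibana index, which is exactly the part that can break across versions.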

Unfortunately, there's no way to ensure this will not break in the near future.
Several changes will come with Elastic stack 6.0 and 7.0, such as:
  • Elasticsearch index mapping types will disappear
  • Kibana will have multi tenancy?
  • Internal storage changes on Kibana (can occur even on minor versions), as the API is not public

Elastic wrote a post on Kibana internals in 2016; it's worth taking a look.

There's no silver bullet solution to move your dashboard out from your cluster in a future-proof manner.

Hope you find this post useful!

Bye!

Tuesday, 26 September 2017

Password-theft 0day imperils users of High Sierra and earlier macOS versions


There's a vulnerability in High Sierra and earlier versions of macOS that allows rogue applications to steal plaintext passwords stored in the Mac keychain, a security researcher said Monday. That's the same day the widely anticipated update was released.

The Mac keychain is a digital vault of sorts that stores passwords and cryptographic keys. Apple engineers have designed it so that installed applications can't access its contents without the user entering a master password. A weakness in the keychain, however, allows rogue apps to steal every plaintext password it stores with no password required. Patrick Wardle, a former National Security Agency hacker who now works for security firm Synack, posted a video demonstration here.

The video shows a Mac virtual machine running High Sierra as it installs an app. Once the app is installed, the video shows an attacker on a remote server running the Netcat networking utility. When the attacker clicks "exfil keychain" button, the app surreptitiously exfiltrates all the passwords stored in the keychain and uploads them to the server. The theft requires no user interaction beyond the initial installation of the rogue app, and neither the app nor macOS provides any warning or seeks permission.

An Apple representative e-mailed the following statement:

macOS is designed to be secure by default, and Gatekeeper warns users against installing unsigned apps, like the one shown in this proof of concept, and prevents them from launching the app without explicit approval. We encourage users to download software only from trusted sources like the Mac App Store and to pay careful attention to security dialogs that macOS presents.

Continually disappointed

By default, Gatekeeper prevents Mac users from installing apps unless they're digitally signed by developers. While the app in the video is unsigned—and as a result can't be installed on a default Mac installation—the vulnerability can be exploited by signed apps as well. All that's required to digitally sign an app is a membership in the Apple Developer Program, which costs $99 per year. Wardle reported the vulnerability to Apple last month and decided to make the disclosure public when the company released High Sierra without fixing it first.

"As a passionate Mac user, I'm continually disappointed in the security of macOS," Wardle told Ars. "I don't mean that to be taken personally by anybody at Apple—but every time I look at macOS the wrong way something falls over. I felt that users should be aware of the risks that are out there."

Wardle said Apple would be served well by implementing a bug bounty program for macOS. Last year, the company established a bounty program that pays as much as $200,000 for security bugs in iOS that runs iPhones and iPads. Apple has declined to pay researchers for private reports of security flaws in macOS. Earlier this month, Wardle published details of a second unfixed bug in High Sierra.



Read the full article here by Ars Technica

MongoDB’s Mongo Moment

As it files for an IPO, open source database vendor MongoDB has rich opportunities, yet also faces its share of challenges.

Read the full article here by Datamation.com

Monday, 25 September 2017

ARM TrustZone Hacked By Abusing Power Management

"This is brilliant and terrifying in equal measure," writes the Morning Paper. Long-time Slashdot reader phantomfive writes: Many CPUs these days have DVFS (Dynamic Voltage and Frequency Scaling), which allows the CPU's clockspeed and voltage to vary dynamically depending on whether the CPU is idling or not. By turning the voltage up and down with one thread, researchers were able to flip bits in another thread. By flipping bits when the second thread was verifying the TrustZone key, the researchers were granted permission. If number 'A' is a product of two large prime numbers, you can flip a few bits in 'A' to get a number that is a product of many smaller numbers, and more easily factorable. "As the first work to show the security ramifications of energy management mechanisms," the researchers reported at Usenix, "we urge the community to re-examine these security-oblivious designs."
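The factoring claim is easy to demonstrate at toy scale (a real RSA modulus is thousands of bits, not twelve; the numbers below are purely illustrative):

```python
# A = p * q for two primes: hard to factor at real sizes.
A = 61 * 53          # 3233
# Flip the lowest bit: the result is even, so it factors trivially.
flipped = A ^ 1      # 3232 = 2**5 * 101

def smallest_factor(n):
    """Trial division; fine for toy numbers."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

print(smallest_factor(A), smallest_factor(flipped))  # 53 2
```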

Read more of this story at Slashdot.



Read the full article here by Slashdot

Sunday, 24 September 2017

Android's 'Check For Update' button now works and actually checks for OTA updates!

Updating Android can be a painful experience depending on the handset you have and the carrier you're with. You might hear that an update is available, but find nothing shows up when you hit the Check For Update button. Now this should be changing... and not just for Oreo users. Many Android users will have experienced the frustration of knowing full well that there is an update available for their device, but it's not offered up automatically. Even more annoyingly, it's often the case that even when performing a manual check, updates remain unavailable for download. Google has announced that:… [Continue Reading]


Read the full article here by Betanews

Saturday, 23 September 2017

Facebook Relents, Switches React, Flow, Immutable.js and Jest To MIT License

An anonymous reader quotes the Register: Faced with growing dissatisfaction about licensing requirements for some of its open-source projects, Facebook said it will move React, Jest, Flow, and Immutable.js under the MIT license next week. "We're relicensing these projects because React is the foundation of a broad ecosystem of open source software for the web, and we don't want to hold back forward progress for nontechnical reasons," said Facebook engineering director Adam Wolff in a blog post on Friday. Wolff said while Facebook continues to believe its BSD + Patents license has benefits, "we acknowledge that we failed to decisively convince this community"... Wolff said the updated licensing scheme will arrive next week with the launch of React 16, a rewrite of the library designed for more efficient operation at scale. Facebook was facing strong criticism from the Apache Software Foundation and last week Wordpress.com had announced plans to move away from React. "Wolff said Facebook considered a license change for its other open-source projects, but wasn't ready to commit to anything," the Register adds. "Some projects, he said, will keep the BSD + Patents license."

Read more of this story at Slashdot.



Read the full article here by Slashdot

Thursday, 21 September 2017

Lumberyard: Building a Better Engine

Amazon has offered a preview of the months ahead, sharing some of the features to be added to Lumberyard.

The focus for the next few releases is to make Lumberyard easier, more powerful, and more modular. The team is working hard to deliver new systems and features that align with these goals, and your feedback has played a crucial role in that process. 

A lot has changed since the engine was first launched: they’ve replaced over 60% of the original codebase, switching out older, redundant systems (e.g. CryEntity) for more modern, performant ones (e.g. Component entity systems)—and this will continue to be the case. While some new systems are still in preview, the team is working to provide a stable foundation for users’ current and future games, so you can build and innovate confidently moving forward. You can also expect more detailed tutorials and documentation to support these systems in the months to come.

So what exactly are these new systems and features? Here’s a glimpse of what you can expect in the next couple of releases:

  • Script Canvas – Script Canvas, the new visual scripting language, will provide a high performance, flexible scripting language in a familiar node interface, so content creators can build gameplay with little to no programming experience. Built entirely from the ground up to be compatible with Lumberyard’s component entity and Behavior Context system, Script Canvas enables teams to easily use any combination of visual scripting, Lua, or C++ coding in their game project. Script Canvas will replace Flow Graph.
  • Brand new animation system – Siggraph attendees got a sneak peek at the new, robust animation solution, built on technology used by well-known publishers such as EA and Ubisoft. The goal here is simple: help animators build amazing characters in Lumberyard with little to no engineering help. This new system will replace the existing CryAnim functionality, including Geppetto and Mannequin, and will include features such as a visual state machine, support for linear skinning, joint scaling, and much more.
  • CryEngine Legacy Deprecation – In addition to streamlining the editor experience, the team will soon hide the legacy functionality to better signal to customers which systems are forward-looking. This effort will help developers migrate from legacy entity systems to the new component entity system, and will include a data converter tool for developers still using some of the older CryEntity systems. A significant amount of legacy system data will be auto-converted to the new component model in the next few releases, all in an effort to remove CryEntity systems from Lumberyard early next year.
  • More Cloud Gems and AWS Integration – soon, you’ll see a Cloud Gem that helps capture in-game surveys from your players, as well as a gem that leverages the power of Amazon Lex and Amazon Polly to build speech recognition, text-to-speech, and conversational gameplay experiences. From there, the roadmap considers new gems that reduce or automate engineering effort to build common connected and social features (e.g. push notifications, metrics, monetization tools, dynamic content, etc.), accelerate production (e.g. asset processing), and enable new player experiences. 
  • Component Entity Workflows – they will continue to improve the component entity workflows, especially in the areas around usability and interoperability with the Entity Inspector, Entity Outliner, and viewport. These improvements also include better support for working on large scale levels with lots of content, improved entity organization and manipulation capabilities, and better slice manipulation and collaboration tools – working towards the eventual ability to open and edit a slice that is not part of a level. 
  • Location-independent Game Project – they plan on delivering Project Configurator changes and build system improvements that enable customers to store their game and gems in any arbitrary location. This has been a popular request from the community.
  • Mobile Performance and Optimization – they are also improving mobile workflows and performance. Their efforts will continue to improve frame rates on minimum-spec devices (iPhone 5S+ and Android Nexus 5 and equivalents), extend battery life, and reduce the memory and app-size footprint on iOS and Android (currently at ~25MB minimum size, with further reductions to come).
  • Memory Management and Thread Unification – they have two on-going initiatives underway to improve runtime performance (especially for PC and console platforms) as well as stability. First off, they will unify and optimize Lumberyard’s memory management. The teams are focused on identifying and resolving existing memory leaks while improving the memory usage patterns throughout the engine. Second, they also plan on migrating the engine’s various threading mechanisms to the AZCore job system, enabling further threading performance improvements and load balancing.
  • New Shader and Material System – the short term objectives are to improve the usability of the material and shader system by converting sub-materials into individual material assets, enabling the concept of a shared material library, and letting a developer author a new surface shader without compiling C++. Longer term, they’re planning a full refactor and modernization of the material and shader system, but they’re going to spend the time to get this right, and this work will go into next year.
  • macOS – the vision has always been to provide a complete, cross-platform experience, so they are also providing the tools necessary for developing Lumberyard games on macOS. This includes the Lumberyard editor and all its associated tools, the asset processor, the shader compiler and, of course, the renderer. Mac support has been a popular request from customers, especially those building iOS games.

© a.sergeev for 80lvl, 2017.



Read the full article here by 80lvl

Google is buying HTC's Pixel team for $1.1 billion

After weeks (months, and years) of speculation, HTC has announced that its "Powered by HTC" R&D division -- the team behind Google's Pixel and Pixel XL smartphones -- will be purchased by Google for $1.1 billion in cash. According to HTC's CFO Peter Shen, this will mean about half -- yes, half -- of the 4,000 people in his company's R&D team will be joining Google, but he emphasized that HTC will continue developing its own range of smartphones, including its next flagship product. The agreement also grants Google a non-exclusive license for a large part of HTC's intellectual property. The deal is expected to be approved and closed by early 2018.

Curious about what all of this means? You could do worse than to check out our guide to the subject from last week.

"This agreement is a brilliant next step in our longstanding partnership, enabling Google to supercharge their hardware business while ensuring continued innovation within our HTC smartphone and Vive virtual reality businesses," HTC co-founder and CEO Cher Wang said in a statement.

The rumor mill went into overdrive yesterday after HTC announced that trading of its shares on the Taiwan Stock Exchange would be halted today pending a "major announcement." The company swiftly added that, rather than debunk sale rumors, it did not "comment on market rumor or speculation."

By then, however, most everyone had assumed that the long-standing flirtation between the two companies would finally come to fruition. Unsubstantiated reports on Twitter claimed that the deal would see HTC's manufacturing division become part of the search giant, but the reality is that half of its R&D division will be joining Google instead. According to Google SVP of Hardware Rick Osterloh, the two parties have yet to set a new work location for these employees, but they will aim to minimize disruption. The remaining R&D team will focus on HTC's own smartphone brand as well as VR technology.

In return, Google "will continue to have access to HTC's IP to support the Pixel smartphone family," according to HTC's statement. Or in Osterloh's own words, it's "continuing our big bet on hardware," which is fitting given his involvement with Google's short-lived ownership of Motorola's smartphone business.

Much like the deal that cleaved Nokia's hardware business from its parent company, the HTC name and brand will live on, in both the smartphone and VR worlds.

Daniel Cooper contributed to this article.

Source: Google, HTC



Read the full article here by Engadget