Friday, 24 March 2017

OneDrive "Runs like a Dog" on Windows OS Rivals

Microsoft is being accused of perverting their OneDrive web app so it performs far more slowly on Linux, ChromeOS, and other Windows rivals. The supposed evidence is that users see an increase in performance once they change their browser's user-agent string to IE or Edge. Uh, I think that just means OneDrive's code for Firefox, Chrome, and other non-MS browsers is terrible, although that in itself is worthy of complaint, I guess. Plenty of Linux users are up in arms about the performance of the OneDrive web app. They say that when accessing Microsoft's cloudy storage system in a browser on a non-Windows system, such as Linux or ChromeOS, the service grinds to a barely usable crawl. But when they use a Windows machine on the same internet connection, speedy access resumes. Crucially, when they change their browser's user-agent string (a snippet of text the browser sends to websites describing itself) to Internet Explorer or Edge, their OneDrive access magically speeds up to normal on their non-Windows PCs.

Read the full article here by [H]ardOCP News/Article Feed

Wednesday, 22 March 2017

From XP to 10, DoubleAgent pwns all your Windows?


The Cybellum team has published a new 0-day technique for injecting code and maintaining persistence on a target computer, dubbed DoubleAgent. The technique abuses a feature, present in every Windows version since XP, that allows an Application Verifier provider DLL to be registered for any executable. The verifier-provider DLL is simply a DLL that is loaded into the process and is nominally responsible for performing run-time verification of the application. However, its internal behaviour can be whatever an attacker wants, since the attacker supplies the DLL.

Microsoft describes it as:

Application Verifier is a runtime verification tool for unmanaged code. Application Verifier assists developers in quickly finding subtle programming errors that can be extremely difficult to identify with normal application testing. Using Application Verifier in Visual Studio makes it easier to create reliable applications by identifying errors caused by heap corruption, incorrect handle and critical section usage. (…)

The code injection occurs extremely early during the victim’s process initialization, giving the attacker full control over the process and no way for the process to actually detect what’s going on. Once a DLL has been registered as a verifier provider DLL for a process, it would permanently be injected by the Windows Loader into the process every time the process starts, even after reboots, updates, reinstalls, or patches.

So it’s all over for Windows, right? Well… no. The thing is, to register this DLL, the registering process must have administrator rights so it can write the proper key to the Windows Registry. Without those permissions, there is no way for this attack to work. You know, the kind of permissions that allow you to install software for all users or format your own hard drive. So, although the technique has its merits and can present challenges to processes that absolutely must maintain their integrity (as the Cybellum team points out in the case of anti-virus software), some other security flaw must already have been exploited before you can register this sort of ‘debugging DLL’.
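Concretely, the registration amounts to a couple of values under the per-executable Image File Execution Options key, which only administrators can write. Here's a hedged sketch: the key and value names follow Microsoft's documented Application Verifier mechanism, but "target.exe" and "payload.dll" are placeholders, and the code only builds the equivalent `reg` commands as strings rather than running anything.

```java
// Illustrative sketch only: builds (but deliberately does not run) the
// "reg add" commands that register an Application Verifier provider DLL
// for a target process. Writing under HKLM requires administrator rights,
// which is exactly the catch discussed above.
public class VerifierRegistration {
    static final String IFEO =
        "HKLM\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Image File Execution Options";

    static String[] commandsFor(String exeName, String dllName) {
        String key = IFEO + "\\" + exeName;
        return new String[] {
            // GlobalFlag 0x100 (FLG_APPLICATION_VERIFIER) switches the mechanism on...
            "reg add \"" + key + "\" /v GlobalFlag /t REG_DWORD /d 0x100 /f",
            // ...and VerifierDlls names the DLL the loader will inject at every start.
            "reg add \"" + key + "\" /v VerifierDlls /t REG_SZ /d " + dllName + " /f",
        };
    }

    public static void main(String[] args) {
        for (String cmd : commandsFor("target.exe", "payload.dll")) {
            System.out.println(cmd);
        }
    }
}
```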

If you already have administrator permissions you can do pretty much whatever you want, including DLL injection to fool anti-virus software. (Though it might be easier to simply disable or remove it.) This new technique has the advantage of being stealthy, but is a 0-day that requires admin rights really a 0-day?

[via The Hacker News]



Read the full article here by Hack a Day

Nintendo approached Cyanogen for the Switch's OS

In the early life of the Nintendo Switch, when it was still codenamed Nintendo NX, there were a lot of rumors floating around about the device. We saw renders of an oval-shaped console with a screen that seemed built into the buttons, and heard rumors that the new device would run Android as its operating system. While the product we have today bears little resemblance to those early prototypes, it looks like the Android rumor may not have been far off. Cyanogen's Kirt McMaster tweeted early this morning to say that Nintendo had approached him about designing a custom Android-based operating system for their new console, but that he had some choice words for the company. Add this to the list of terrible business decisions by Cyanogen and its CEO.

Read the full article here by OSNews

Performance Bugs, 'the Dark Matter of Programming Bugs', Are Out There Lurking and Unseen

Several Slashdot readers have shared an article by programmer Nicholas Chapman, who talks about a class of bugs that he calls "performance bugs". From the article: A performance bug is when the code computes the correct result, but runs slower than it should due to a programming mistake. The nefarious thing about performance bugs is that the user may never know they are there -- the program appears to work correctly, carrying out the correct operations, showing the right thing on the screen or printing the right text. It just does it a bit more slowly than it should have. It takes an experienced programmer, with a reasonably accurate mental model of the problem and the correct solution, to know how fast the operation should have been performed, and hence if the program is running slower than it should be. I started documenting a few of the performance bugs I came across a few months ago, for example (on some platforms) the insert method of std::map is roughly 7 times slower than it should be, std::map::count() is about twice as slow as it should be, std::map::find() is 15% slower than it should be, aligned malloc is a lot slower than it should be in VS2015.
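Chapman's examples are C++-specific, but the pattern is language-agnostic. Here's a minimal Java analogue (mine, not from the article): both methods return the identical, correct string, so no correctness test will ever flag the slow one; only timing, a profiler, or a programmer who knows how fast it *should* be will.

```java
public class PerfBug {
    // Correct but quadratic: each += copies the entire string built so far.
    static String joinSlow(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += i + ",";
        }
        return s;
    }

    // Same result in linear time: StringBuilder appends in amortized O(1).
    static String joinFast(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Identical output -- the performance bug is invisible to any check
        // that only looks at what the program computes.
        System.out.println(joinSlow(5).equals(joinFast(5))); // prints "true"
    }
}
```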

Read more of this story at Slashdot.



Read the full article here by Slashdot

An Upgrade to SyntaxNet, New Models and a Parsing Competition

At Google, we continuously improve the language understanding capabilities used in applications ranging from generation of email responses to translation. Last summer, we open-sourced SyntaxNet, a neural-network framework for analyzing and understanding the grammatical structure of sentences. Included in our release was Parsey McParseface, a state-of-the-art model that we had trained for analyzing English, followed quickly by a collection of pre-trained models for 40 additional languages, which we dubbed Parsey's Cousins. While we were excited to share our research and to provide these resources to the broader community, building machine learning systems that work well for languages other than English remains an ongoing challenge. We are excited to announce a few new research resources, available now, that address this problem.

SyntaxNet Upgrade
We are releasing a major upgrade to SyntaxNet. This upgrade incorporates nearly a year’s worth of our research on multilingual language understanding, and is available to anyone interested in building systems for processing and understanding text. At the core of the upgrade is a new technology that enables learning of richly layered representations of input sentences. More specifically, the upgrade extends TensorFlow to allow joint modeling of multiple levels of linguistic structure, and to allow neural-network architectures to be created dynamically during processing of a sentence or document.

Our upgrade makes it, for example, easy to build character-based models that learn to compose individual characters into words (e.g. ‘c-a-t’ spells ‘cat’). By doing so, the models can learn that words can be related to each other because they share common parts (e.g. ‘cats’ is the plural of ‘cat’ and shares the same stem; ‘wildcat’ is a type of ‘cat’). Parsey and Parsey’s Cousins, on the other hand, operated over sequences of words. As a result, they were forced to memorize words seen during training and relied mostly on the context to determine the grammatical function of previously unseen words.
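To make the intuition concrete: even a crude, non-neural measure over characters captures the relatedness the paragraph describes. The toy below (nothing like SyntaxNet's actual model, just an illustration) scores word pairs by character-bigram overlap, so 'cat', 'cats', and 'wildcat' come out related even if 'wildcat' was never seen during training, while a word-level model would treat it as an opaque unknown token.

```java
import java.util.HashSet;
import java.util.Set;

// Toy illustration (not SyntaxNet): character n-gram overlap is enough to
// see that words sharing parts ("cat", "cats", "wildcat") are related.
public class CharOverlap {
    static Set<String> bigrams(String word) {
        Set<String> grams = new HashSet<>();
        for (int i = 0; i + 2 <= word.length(); i++) {
            grams.add(word.substring(i, i + 2));
        }
        return grams;
    }

    // Jaccard similarity over character bigrams, in [0, 1].
    static double similarity(String a, String b) {
        Set<String> ga = bigrams(a), gb = bigrams(b);
        Set<String> inter = new HashSet<>(ga);
        inter.retainAll(gb);
        Set<String> union = new HashSet<>(ga);
        union.addAll(gb);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        System.out.println(similarity("cat", "cats"));    // high: shared stem
        System.out.println(similarity("cat", "wildcat")); // nonzero: shared part
        System.out.println(similarity("cat", "dog"));     // 0.0: nothing shared
    }
}
```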

As an example, consider the following (meaningless but grammatically correct) sentence:
"The gostak distims the doshes." This sentence was originally coined by Andrew Ingraham, who explained: "You do not know what this means; nor do I. But if we assume that it is English, we know that the doshes are distimmed by the gostak. We know too that one distimmer of doshes is a gostak." Systematic patterns in morphology and syntax allow us to guess the grammatical function of words even when they are completely novel: we understand that 'doshes' is the plural of the noun 'dosh' (similar to the 'cats' example above) and that 'distims' is the third-person singular of the verb 'distim'. Based on this analysis we can then derive the overall structure of this sentence even though we have never seen the words before.

ParseySaurus
To showcase the new capabilities provided by our upgrade to SyntaxNet, we are releasing a set of new pretrained models called ParseySaurus. These models use the character-based input representation mentioned above and are thus much better at predicting the meaning of new words based both on their spelling and how they are used in context. The ParseySaurus models are far more accurate than Parsey’s Cousins (reducing errors by as much as 25%), particularly for morphologically-rich languages like Russian, or agglutinative languages like Turkish and Hungarian. In those languages there can be dozens of forms for each word and many of these forms might never be observed during training - even in a very large corpus.

Consider the following fictitious Russian sentence, where again the stems are meaningless, but the suffixes define an unambiguous interpretation of the sentence structure:
Even though our Russian ParseySaurus model has never seen these words, it can correctly analyze the sentence by inspecting the character sequences which constitute each word. In doing so, the system can determine many properties of the words (notice how many more morphological features there are here than in the English example). To see the sentence as ParseySaurus does, here is a visualization of how the model analyzes this sentence:
Each square represents one node in the neural network graph, and lines show the connections between them. The left-side “tail” of the graph shows the model consuming the input as one long string of characters. These are intermittently passed to the right side, where the rich web of connections shows the model composing words into phrases and producing a syntactic parse. Check out the full-size rendering here.

A Competition
You might be wondering whether character-based modeling is all we need, or whether there are other techniques that might be important. SyntaxNet has lots more to offer, like beam search and different training objectives, but there are of course also many other possibilities. To find out what works well in practice, we are helping co-organize, together with Charles University and other colleagues, a multilingual parsing competition at this year's Conference on Computational Natural Language Learning (CoNLL), with the goal of building syntactic parsing systems that work well in real-world settings and for 45 different languages.

The competition is made possible by the Universal Dependencies (UD) initiative, whose goal is to develop cross-linguistically consistent treebanks. Because machine learned models can only be as good as the data that they have access to, we have been contributing data to UD since 2013. For the competition, we partnered with UD and DFKI to build a new multilingual evaluation set consisting of 1000 sentences that have been translated into 20+ different languages and annotated by linguists with parse trees. This evaluation set is the first of its kind (in the past, each language had its own independent evaluation set) and will enable more consistent cross-lingual comparisons. Because the sentences have the same meaning and have been annotated according to the same guidelines, we will be able to get closer to answering the question of which languages might be harder to parse.

We hope that the upgraded SyntaxNet framework and our pre-trained ParseySaurus models will inspire researchers to participate in the competition. We have additionally created a tutorial showing how to load a Docker image and train models on the Google Cloud Platform, to facilitate participation by smaller teams with limited resources. So, if you have an idea for making your own models with the SyntaxNet framework, sign up to compete! We believe that the configurations we are releasing are a good place to start, but we look forward to seeing how participants will be able to extend and improve these models, or perhaps create better ones!

Thanks to everyone involved who made this competition happen, including our collaborators at UD-Pipe, who provide another baseline implementation to make it easy to enter the competition. Happy parsing from the main developers, Chris Alberti, Daniel Andor, Ivan Bogatyy, Mark Omernick, Zora Tung and Ji Ma!

By David Weiss and Slav Petrov, Research Scientists


Read the full article here by Google Open Source Blog

Intel’s Bold Plan to Reinvent Computer Memory (and Keep It a Secret)

Intel just unleashed a new kind of computer memory it believes will change the way the world builds computers. But it won't tell the world what's inside.

Read the full article here by Wired Top Stories

Tuesday, 21 March 2017

John Goodenough's Colleagues Are Skeptical of His New Battery Technology

Earlier this month, a research team led by John Goodenough announced that they had created a new fast charging solid-state battery that can operate in extreme temperatures and store five to ten times as much energy as current standard lithium-ion batteries. The announcement was big enough to have Google's Eric Schmidt tweeting about it. However, there are some skeptics, including other leading battery researchers. "For his invention to work as described, they say, it would probably have to abandon the laws of thermodynamics, which say perpetual motion is not possible," reports Quartz. "The law has been a fundamental of batteries for more than a century and a half." Quartz reports: Goodenough's long career has defined the modern battery industry. Researchers assume that his measurements are exact. But no one outside of Goodenough's own group appears to understand his new concept. The battery community is loath to openly challenge the paper, but some come close. "If anyone but Goodenough published this, I would be, well, it's hard to find a polite word," Daniel Steingart, a professor at Princeton, told Quartz. Goodenough did not respond to emails. But in a statement released by the University of Texas, where he holds an engineering chair, he said, "We believe our discovery solves many of the problems that are inherent in today's batteries. Cost, safety, energy density, rates of charge and discharge and cycle life are critical for battery-driven cars to be more widely adopted." In addition, Helena Braga, the paper's lead author, in an exchange of emails, insisted that the team's claims are valid. For almost four decades, Goodenough has dominated the world of advanced batteries. If anyone could finally make the breakthrough that allows for cheap, stored electricity in cars and on the grid, it would figure to be him. Goodenough invented the heart of the battery that is all but certainly powering the device on which you are reading this. 
It's the lithium-cobalt-oxide cathode, invented in 1980 and introduced for sale by Sony in 1991. Again and again, Goodenough's lab has emerged with dramatic discoveries confirming his genius. It's what is not stated in the paper that has some of the battery community stumped. How is Goodenough's new invention storing any energy at all? The known rules of physics state that, to derive energy, differing materials must produce differing electrochemical reactions in the two opposing electrodes. That difference produces voltage, allowing energy to be stored. But Goodenough's battery has pure metallic lithium or sodium on both sides. Therefore, the voltage should be zero, with no energy produced, battery researchers told Quartz. Goodenough reports energy densities multiple times that of current lithium-ion batteries. Where does the energy come from, if not the electrode reactions? That goes unexplained in the paper.
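The back-of-the-envelope version of the skeptics' objection: a cell's open-circuit voltage is the difference between its two half-cell potentials, so identical electrodes should give zero volts and hence zero stored energy.

```latex
E_{\text{cell}} = E_{\text{cathode}} - E_{\text{anode}}
  \qquad\Rightarrow\qquad
E_{\text{cell}} = E^{\circ}(\mathrm{Li^{+}/Li}) - E^{\circ}(\mathrm{Li^{+}/Li}) = 0\,\mathrm{V},
  \qquad
W = Q \cdot E_{\text{cell}} = 0
```

That zero is the gap the skeptics are pointing at; the paper reports nonzero energy densities without identifying the asymmetry that would supply the voltage.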




Read the full article here by Slashdot

O-MG, the Developer Preview of Android O is here!

Posted by Dave Burke, VP of Engineering

Since the first launch in 2008, the Android project has thrived on the incredible feedback from our vibrant ecosystem of app developers and device makers, as well as, of course, our users. More recently, we've been pushing hard on improving our engineering processes so we can share our work earlier and more openly with our partners.

So, today, I'm excited to share a first developer preview of the next version of the OS: Android O. The usual caveats apply: it's early days, there are more features coming, and there's still plenty of stabilization and performance work ahead of us. But it's booting :).

Over the course of the next several months, we'll be releasing updated developer previews, and we'll be doing a deep dive on all things Android at Google I/O in May. In the meantime, we'd love your feedback on trying out new features, and of course testing your apps on the new OS.

What's new in O?

Android O introduces a number of new features and APIs to use in your apps. Here are just a few new things for you to start trying in this first Developer Preview:

Background limits: Building on the work we began in Nougat, Android O puts a big priority on improving a user's battery life and the device's interactive performance. To make this possible, we've put additional automatic limits on what apps can do in the background, in three main areas: implicit broadcasts, background services, and location updates. These changes will make it easier to create apps that have minimal impact on a user's device and battery. Background limits represent a significant change in Android, so we want every developer to get familiar with them. Check out the documentation on background execution limits and background location limits for details.

Notification channels: Android O also introduces notification channels, which are new app-defined categories for notification content. Channels let developers give users fine-grained control over different kinds of notifications — users can block or change the behavior of each channel individually, rather than managing all of the app's notifications together.

Notification channels let users control your app's notification categories

Android O also adds new visuals and grouping to notifications that make it easier for users to see what's going on when they have an incoming message or are glancing at the notification shade.
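A minimal sketch of the channel flow described above, in Android Java. Assumptions are flagged in the comments: "messages" is an arbitrary channel id of our choosing, `R.drawable.ic_message` is a placeholder app resource, and this is an illustrative outline rather than a complete, production-ready notifier.

```java
import android.app.Notification;
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.content.Context;

// Sketch of the Android O notification-channel flow. "messages" is an
// arbitrary channel id and R.drawable.ic_message a placeholder icon.
public class MessageNotifier {
    static void notifyNewMessage(Context context, String text) {
        NotificationManager nm = context.getSystemService(NotificationManager.class);

        // Declare the category once; the user can later silence or block
        // this channel without affecting the app's other notifications.
        NotificationChannel channel = new NotificationChannel(
                "messages", "Messages", NotificationManager.IMPORTANCE_DEFAULT);
        nm.createNotificationChannel(channel); // no-op if it already exists

        // On O, the Notification.Builder takes the channel id; notifications
        // posted without a valid channel may be dropped by the system.
        Notification notification = new Notification.Builder(context, "messages")
                .setContentTitle("New message")
                .setContentText(text)
                .setSmallIcon(R.drawable.ic_message)
                .build();
        nm.notify(1, notification);
    }
}
```

The point of the design is that the per-channel settings belong to the user, not the app: once "Messages" exists, its importance and visibility are whatever the user has set them to, regardless of what the app requested at creation time.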

Autofill APIs: Android users already depend on a range of password managers to autofill login details and repetitive information, which makes setting up new apps or placing transactions easier. Now we are making this work more easily across the ecosystem by adding platform support for autofill. Users can select an autofill app, similar to the way they select a keyboard app. The autofill app stores and secures user data, such as addresses, user names, and even passwords. For apps that want to handle autofill, we're adding new APIs to implement an Autofill service.

PIP for handsets and new windowing features: Picture in Picture (PIP) display is now available on phones and tablets, so users can continue watching a video while they're answering a chat or hailing a car. Apps can put themselves in PiP mode from the resumed or a pausing state where the system supports it - and you can specify the aspect ratio and a set of custom interactions (such as play/pause). Other new windowing features include a new app overlay window for apps to use instead of system alert window, and multi-display support for launching an activity on a remote display.

Font resources in XML: Fonts are now a fully supported resource type in Android O. Apps can now use fonts in XML layouts as well as define font families in XML — declaring the font style and weight along with the font files.

Adaptive icons: To help you integrate better with the device UI, you can now create adaptive icons that the system displays in different shapes, based on a mask selected by the device. The system also animates interactions with the icons, and uses them in the launcher, shortcuts, Settings, sharing dialogs, and in the overview screen.

Adaptive icons display in a variety of shapes across different device models.

Wide-gamut color for apps: Android developers of imaging apps can now take advantage of new devices that have a wide-gamut color capable display. To display wide gamut images, apps will need to enable a flag in their manifest (per activity) and load bitmaps with an embedded wide color profile (AdobeRGB, Pro Photo RGB, DCI-P3, etc.).

Connectivity: For the ultimate in audio fidelity, Android O now also supports high-quality Bluetooth audio codecs such as LDAC codec. We're also adding new Wi-Fi features as well, like Wi-Fi Aware, previously known as Neighbor Awareness Networking (NAN). On devices with the appropriate hardware, apps and nearby devices can discover and communicate over Wi-Fi without an Internet access point. We're working with our hardware partners to bring Wi-Fi Aware technology to devices as soon as possible.

The Telecom framework is extending the ConnectionService APIs to enable third-party calling apps to integrate with the System UI and operate seamlessly with other audio apps. For instance, apps can have their calls displayed and controlled in different kinds of UIs, such as car head units.

Keyboard navigation: With the advent of Google Play apps on Chrome OS and other large form factors, we're seeing a resurgence of keyboard navigation use within these apps. In Android O we focused on building a more reliable, predictable model for "arrow" and "tab" navigation that aids both developers and end users.

AAudio API for Pro Audio: AAudio is a new native API that's designed specifically for apps that require high-performance, low-latency audio. Apps using AAudio read and write data via streams. In the Developer Preview we're releasing an early version of this new API to get your feedback.

WebView enhancements: In Android Nougat we introduced an optional multiprocess mode for WebView that moved the handling of web content into an isolated process. In Android O, we're enabling multiprocess mode by default and adding an API to let your app handle errors and crashes, for enhanced security and improved app stability. As a further security measure, you can now opt in your app's WebView objects to verify URLs through Google Safe Browsing.

Java 8 Language APIs and runtime optimizations: Android now supports several new Java Language APIs, including the new java.time API. In addition, the Android Runtime is faster than ever before, with improvements of up to 2x on some application benchmarks.
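For a quick taste of the newly supported java.time API, here's a self-contained snippet; the release date is from this post, while the May I/O date is an assumption for illustration.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// A small taste of the java.time API now available on Android O:
// immutable, well-typed date classes that replace much of Calendar/Date.
public class JavaTimeDemo {
    static long daysBetween(LocalDate a, LocalDate b) {
        return ChronoUnit.DAYS.between(a, b);
    }

    public static void main(String[] args) {
        LocalDate preview = LocalDate.of(2017, 3, 21); // O preview announced
        LocalDate io = LocalDate.of(2017, 5, 17);      // assumed I/O 2017 date
        System.out.println(daysBetween(preview, io));  // prints "57"
    }
}
```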

Partner platform contributions: Hardware manufacturers and silicon partners have accelerated fixes and enhancements to the Android platform in the O release. For example, Sony has contributed more than 30 feature enhancements, including the LDAC codec, and 250 bug fixes to Android O.

Get started in a few simple steps

First, make your app compatible to give your users a seamless transition to Android O. Just download a device system image or emulator system image, install your current app, and test -- the app should run and look great, and handle behavior changes properly. After you've made any necessary updates, we recommend publishing to Google Play right away without changing the app's platform targeting.

Building with Android O

When you're ready, dive in to O in depth to learn about everything you can take advantage of for your app. Visit the O Developer Preview site for details on the preview timeline, behavior changes, new APIs, and support resources.

Plan how your app will support background limits and other changes. Try out some of the great new features in your app -- notification channels, PIP, adaptive icons, font resources in XML, autosizing TextView, and many others. To make it easier to explore the new APIs in Android O, we've brought the API diff report online, along with the Android O API reference.

Coming later today, the latest canary version of Android Studio 2.4 includes new features to help you get started with Android O. When this update is available, you can download and set up the O preview SDK from inside Android Studio, then use Android O's XML font resources and autosizing TextView in the Layout Editor. Watch for more Android O support coming in the weeks ahead.

We're also releasing an alpha version of the 26.0.0 support library for you to try.

Preview updates

The O Developer Preview includes an updated SDK with system images for testing on the official Android Emulator and on Nexus 5X, Nexus 6P, Nexus Player, Pixel, Pixel XL and Pixel C devices. If you're building for wearables, there's also an emulator for testing Android Wear 2.0 on Android O.

We plan to update the preview system images and SDK regularly throughout the O Developer Preview. This initial preview release is for developers only and not intended for daily or consumer use, so we're making it available by manual download and flash only. Downloads and instructions are here.

As we get closer to a final product, we'll be inviting consumers to try it out as well, and we'll open up enrollments through Android Beta at that time. Stay tuned for details, but for now please note that Android Beta is not currently available for Android O.

Give us your feedback

As always, your feedback is crucial, so please let us know what you think — the sooner we hear from you, the more of your feedback we can integrate. When you find issues, please report them here. We've moved to a more robust tool, Issue Tracker, which is also used internally at Google to track bugs and feature requests during product development. We hope you'll find it easier to use.



Read the full article here by Android Developers Blog

Facebook may show off its hardware efforts in April

When Facebook launched its hardware-focused Building 8, it raised all kinds of questions: just what was it making in there, and when would you see the first fruits of its labor? You might not have to wait long to get the details. Sources speaking to Business Insider claim to have a broad overview of not only what Building 8 is creating, but when you might get to see it. The team is reportedly working on four advanced technology projects, and is poised to play a key role in Facebook's F8 conference in April. In other words, some of its secretive work would be revealed in a matter of weeks.

Clues to some of it have surfaced in the recent past. There's already been talk of brain-scanning tech, and recent hires hint at some kind of drone (it's not clear that this is related to the existing internet drone project). Others, however, are relatively fresh. One would focus on augmented reality, according to the insiders, while another is headed up by a Stanford cardiologist with knowledge of "early-stage medical device development."

Building 8's leader, ex-Googler Regina Dugan, may have even given a clue as to the augmented reality project. Not long ago, she wrote two posts decrying the inability to truly connect with people who are far away, and was "optimistic" that "hardware advances" could help solve the problem. An augmented reality system that makes it feel like others are in the room, perhaps?

The company hasn't confirmed anything, so it's still a good idea to take this with a grain of salt even if it's accurate. Development teams can miss schedules or cancel projects entirely, after all. If there's any truth here, though, Facebook is about to take a big step toward becoming a multi-product hardware company that isn't just focused on VR.

Source: Business Insider



Read the full article here by Engadget

Monday, 20 March 2017

WebGL 2.0 Support Now Available in Chrome

You may soon notice a boost to 3D web graphics while using Chrome: Google has announced that the desktop version now fully supports WebGL 2.0. Improvements include faster performance, new texture types, and visual effects. Firefox and Opera already support it, but this is a bigger deal since Chrome has 60% of the market share. Edge is still stuck with WebGL 1.0. WebGL 2.0 is a major upgrade to the API which unlocks a variety of new graphics features and advanced rendering techniques. WebGL 2.0 makes it even easier to build 3D web applications, with faster real-time rendering, new types of textures and shaders, and reduced video memory consumption. Techniques including deferred shading, tone mapping, volumetric effects, and particle effects can now be efficiently implemented. The new APIs also bring WebGL up to feature parity with OpenGL ES 3.0, a graphics platform commonly used in mobile games.

Read the full article here by [H]ardOCP News/Article Feed

Vomiting Emoji

My favorite might be U+1F609 U+1F93F WINKING FACE VOMITING.

Read the full article here by xkcd.com

Saturday, 18 March 2017

Rare Nuclear Test Films Saved, Declassified, and Uploaded to YouTube

A weapons physicist and his team have rescued and restored a plethora of deteriorating nuclear-test films and have uploaded them for everyone to enjoy. Around 750 have been declassified, and there is plenty more to come, as 6,500 films (out of an estimated 10,000) have been found. Check out the playlist for over 60 nuclear explosions. The U.S. conducted 210 atmospheric nuclear tests between 1945 and 1962, with multiple cameras capturing each event at around 2,400 frames per second. But in the decades since, around 10,000 of these films sat idle, scattered across the country in high-security vaults. Not only were they gathering dust, the film material itself was slowly decomposing, bringing the data they contained to the brink of being lost forever. For the past five years, Lawrence Livermore National Laboratory (LLNL) weapon physicist Greg Spriggs and a crack team of film experts, archivists and software developers have been on a mission to hunt down, scan, reanalyze and declassify these decomposing films.

Read the full article here by [H]ardOCP News/Article Feed

PC Building Simulator Is a Game about Building a PC

In a serious case of Inception, you can now download a game that lets you build a gaming PC on your gaming PC. This is the perfect title for most of you here, who have absolutely no experience with that sort of thing. Interestingly, there will even be a career mode where you attempt to make a living as a custom PC builder. True to the legacy of classic flight sims or the more modern trucking simulators, PC Building Simulator is relatively realistic when it comes to simulating putting together a gaming PC. Sure, all you're doing is selecting parts from a menu and clicking them into place, but just as flight sims make you flip all the switches and turn all the knobs, PCBS has you screwing in motherboard standoffs and manually connecting MOLEX cables.

Read the full article here by [H]ardOCP News/Article Feed

Friday, 17 March 2017

Google Data Studio


Data Studio lets you create dynamic, visually compelling reports and dashboards.



Read the full article here by MoMB

Karting | The Trofeo di Primavera gets underway at Corridonia

The Zone 5 Regional Championship begins on 19 March at the Pista Cogiskart in Corridonia. They have already seen the start...

Read the full article here by FormulaPassion.it

Is WebAssembly more performant than asm.js?

IBM open sources the API Microgateway

I am thrilled to announce that the IBM API Microgateway is now open source and available on GitHub. The API Microgateway has two powerful features developers will love:

  • An extensible, dynamic Node.js API policy flow engine, based on Swagger, with built-in policy support for OAuth 2.0 and rate limiting
  • NGINX's industry-proven reverse proxy capabilities combined with Node.js's highly scalable event-driven architecture

Why use an API gateway?

API gateways play an integral role to protect, enrich, and control access to API services. They allow you to architect your application in a way that provides clear separation between your business logic and security logic. In the long run, this will reduce application errors and provide a common reuse layer across your application.
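One of the built-in policies mentioned above, rate limiting, is at its core the classic token-bucket idea. Here's a minimal sketch in Python of that general technique (illustrative names only, not the Microgateway's actual implementation or API):

```python
import time

class TokenBucket:
    """Classic token-bucket rate limiter (illustrative names, not the
    Microgateway's actual API)."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to the time elapsed since the last call.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, capacity=5)      # 2 requests/s, bursts of 5
results = [bucket.allow() for _ in range(7)]
print(results)  # the initial burst passes, the overflow is throttled
```

A gateway would typically keep one such bucket per client or API key and reject requests (HTTP 429) when `allow()` returns false.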

Diagram of an API gateway

The API Microgateway policy flow engine 

Developers can use policy constructs (if / switch) to create visual policy flows, transform payloads, and invoke backends. When you need to roll up your sleeves and apply your own logic, the JavaScript policy provides ultimate flexibility to enrich the payload. The API Microgateway provides developers a first-class framework for building your own gateway solution to meet your API needs.

Building a gateway solution

Our community-first approach

The IBM Gateway team has over 15 years of experience in building gateway solutions. Our gateway offerings range from DataPower, an enterprise-proven gateway solution with comprehensive built-in policies, to the lightweight, infinitely extensible, open source API Microgateway.

We are excited to contribute our gateway expertise back to the community. The Node.js community is one of the most active and vibrant developer communities, and we are confident that with its help we can further evolve the API Microgateway.

For more information, check out The API Microgateway on GitHub.

The post IBM open sources the API Microgateway appeared first on developerWorks Open.



Read the full article here by developerWorks Open

Is Microsoft blocking Windows 7/8.1 updates on newer hardware?

A year ago, Microsoft revealed that Windows 10 would be the only Windows platform to support nextgen processors like Intel's Kaby Lake, AMD's Bristol Ridge, and Qualcomm’s 8996. The message then -- as now -- was clear: If you want to run a nextgen processor, you'll need Windows 10. Last week, Microsoft published KB 4012982, with the title "'Your PC uses a processor that isn’t supported on this version of Windows' error when you scan or download Windows updates", suggesting that the restriction was now being enforced. SEE ALSO: Here's how to be among the first to get the Windows… [Continue Reading]


Read the full article here by Betanews

Thursday, 16 March 2017

Four short links: 16 March 2017

Werewolf AI, Board Games, Coin Tossing, and Glitch Platform

  1. Towards Deception Detection in a Language-Driven Game (PDF) -- This paper focuses exclusively on how the Explanation Generator generates hypotheses for the actions of human players based on observations of their conversational utterances. Werewolf is their test data. I do not think it is wise to teach the softwares to play Werewolf.
  2. CIA Trains Officers with Board Games (Ars Technica) -- where are the software/startup simulation board games? (via BoingBoing)
  3. The Impact of a Coin Toss on Major Life Decisions and Subsequent Happiness (PDF) -- Those who flipped heads were approximately 25% more likely to report making a change than those who got tails.
  4. Glitch -- sweet collaboratively edited code for web apps, with View Source, but clearly laying a path to being commercial PaaS. Neat.

Continue reading Four short links: 16 March 2017.



Read the full article here by Four Short Links - O'Reilly Media

Wednesday, 15 March 2017

Sudo (Board Game)



Read the full article here by turnoff.us - geek comic site

Best-Tasting Colors

I recognize that chocolate is its own thing on which reasonable people may differ. Everything else here is objective fact.

Read the full article here by xkcd.com

Android: native Java 8 support, goodbye to the Jack toolchain

Engineers On Google's Self-Driving Car Project Were Paid So Much That They Quit

According to a new report from Bloomberg, most of the money Google spent on its self-driving car project, now spun off into a new entity called Waymo, has gone to engineers and other staff. While the pay helped retain a lot of influential and dedicated workers in the short run, in the long run the resulting financial security led many staffers to leave the company. The Verge reports: Bloomberg says that early staffers "had an unusual compensation system" that multiplied staffers' salaries and bonuses based on the performance of the self-driving project. The payments accumulated as milestones were reached, even though Waymo remains years away from generating revenue. One staffer eventually "had a multiplier of 16 applied to bonuses and equity amassed over four years." The huge amounts of compensation worked -- for a while. But eventually, it gave many staffers such financial security that they were willing to leave the cuddly confines of Google. Two staffers that Bloomberg spoke to called it "F-you money," and the accumulated cash allowed them to depart Google for other firms, including Chris Urmson who co-founded a startup with ex-Tesla employee Sterling Anderson, and others who founded a self-driving truck company called Otto which was purchased by Uber last year, and another who founded Argo AI which received a $1 billion investment from Ford last week.

Read more of this story at Slashdot.



Read the full article here by Slashdot

Tuesday, 14 March 2017

Gabe Newell: Valve is thinking how to distribute Source 2

Valve boss Gabe Newell has said in a video interview that the company is thinking about how it will start delivering Source 2 to those who want it. For Valve, he says, the engine is "insanely useful", but he noted that it probably won't be for everyone.

This interview was filmed by the Valve News Network and spotted by the sharp eyes at PCGamesN.

“If it’s useful for people, it’s available for them to use,” says Valve boss Gabe Newell. “For us, it’s insanely useful - it does exactly what we need an engine to do and it’s evolving in the directions that we think are important. It’s kind of an industrial-strength solution. For our developers it works great, for other developers it’s not nearly as useful as Unity. So it’s sort of like, it’s here if people want it.”

Newell explains that he views the Source engine's latest iteration as a tool in Valve's arsenal, and that it's part of Steam, so some parts of it will be useful for developers, and others won't.

"It’s not a way to make money for us, it’s something for developers to look at and think ‘wow, that’s exactly what I’m trying to do’,” said Newell.

Valve say in the video that they're working on getting Source 2 to the point it can be offered en masse to developers that want it, but right now it's not a priority as they'd rather work on their own projects first.

There's a lot of information in the video, so I'd recommend checking it out for words direct from the man himself.



Read the full article here by Develop Feed

Google reveals Jamboard price and release date

If you're looking for an alternative to Microsoft's Surface Hub, Google has just the thing for you. Unveiled last October, Jamboard is a 55-inch whiteboard featuring a 4K touchscreen display, HD camera, speakers and integration with the search giant's G Suite productivity apps. Google did not initially reveal how much it will cost or when it will be available, but the company now says that it will be priced just shy of $5,000 and will be released in just a couple of months. More specifically, the Jamboard will be available to purchase starting this May for $4,999. There is no exact… [Continue Reading]


Read the full article here by Betanews

C++17 Is Near, A Look At The New Features

Reports out over the weekend indicate that C++17 is "done" as the next version of the C++ ISO standard...

Read the full article here by Phoronix

Four short links: 13 March 2017

Attention Prioritization, Event Sourcing, Containerized Dropbox, and Player Modeling

  1. ASAP: Automatic Smoothing for Attention Prioritization in Time Series -- automatically smooths time series plots to remove short-term noise while retaining large-scale deviations.
  2. PumpkinDB -- Event sourcing database engine that doesn't overwrite data.
  3. Run Dropbox in a Container -- keep its grubby fingers off your account.
  4. Ethical Considerations in Player Modeling -- We source categories of ethical issues in the application of artificial intelligence (AI) from work on AI ethics, and using these, we provide several specific examples of ethical issues in player modeling. Building from the examples, we suggest establishing a framework for understanding ethical issues in player modeling, and we propose a number of methodological approaches to address the identified challenges.

Continue reading Four short links: 13 March 2017.



Read the full article here by Four Short Links - O'Reilly Media

Alphabet's Jigsaw Wants To Explain Tech Jargon To You, Launches Sideways Dictionary

It might sound obvious, but the thing about tech is that sometimes it can get really, well, technical. From a report on CNET: So Alphabet wants to help make nitty-gritty tech jargon simpler to explain to the masses. On Tuesday, Jigsaw, a tech incubator owned by Google's parent company, launched a website called the Sideways Dictionary that takes jargon and puts it into terms normal people would understand. Jigsaw partnered with the Washington Post to build the tool.

Read more of this story at Slashdot.



Read the full article here by Slashdot

Teams is Microsoft's most intriguing productivity app yet

Group collaboration software isn't anything new, but in recent years we've seen an explosion of new solutions aiming to redefine how teams work. There's Slack, of course, which has spread across startups and big organizations like wildfire. But even before that, companies relied on things like Hipchat, Yammer and plain old IRC. When Microsoft first unveiled its own offering last fall, the not-so-subtly named Teams, it initially seemed like the software giant was just jumping on the latest productivity bandwagon. It also seemed a bit redundant, since Microsoft owns Yammer. But it quickly became clear that Microsoft had some big ideas in store.

Today marks the next major step for Microsoft Teams: It's opening up to all Office 365 commercial organizations, a figure that includes more than 85 million monthly users. It's also packing in several new features, like the ability to schedule meetings without leaving the Teams interface. As it evolves, Teams is looking more and more like the ideal productivity solution from Microsoft, wrapping in elements from other Office 365 apps while also delivering entirely new ways to work together.

On the surface, Microsoft Teams looks like a slight twist on the Slack formula: a row of rooms on the left pane, and a fast-scrolling wall of text on the right side. But Teams quickly differentiates itself. Threaded messaging is core to the app's experience, for example, allowing you to quickly browse and jump into conversations without confusing them with later messages. That's something Slack has taken three years to release, and its implementation also feels like an afterthought. And on top of the message window is a row of tabs that points to things like shared files, a team wiki and whatever else you'd like.

As for what's new since we first saw Teams, Microsoft has added features like the ability to continue email conversations by mailing Teams groups, and integrating bots and other app connections. The company says more than 150 software integrations will be headed to Teams soon, including Growbot and Trello. So, instead of having a separate app or browser window open to check on your group's Trello list progress, you can simply make it a tab atop a Teams room's chat window.

As a fairly dedicated Slack user, I was surprised by just how thoughtfully designed Teams was after using it for a few days. It's easier to follow threaded conversations, since they actually look like threads instead of weird off-shoot chats. It's far simpler to start video chats with teammates, and you can even schedule video meetings from within rooms. And given that Teams supports video chats with up to 80 people, it could conceivably end up replacing conference calls for some workers. Even the mere act of creating new channels seems a lot more fluid than Slack, reflecting the fact that working groups tend to evolve quickly over time. A private chat between a few colleagues could end up becoming a channel to house conversations around a new corporate project, for instance.

On the mobile front, the Teams iOS and Android apps are both clean and well designed. Oddly enough, the Android app is the most advanced version, since it supports video and audio calls. That's something both Windows Phone and iOS will have to wait for.

Microsoft is quick to admit that Teams won't be the perfect solution for every team. But the company is positioning Office 365 as a bouquet of options for any group hoping to work together. If you don't need the rich collaboration features of Teams, you can just rely on Yammer, Word, Excel and other traditional Office apps. Given that Teams won't cost Office 365 customers anything extra, though, it has a good chance of seeing some quick pickup. One of Slack's biggest criticisms is that it's expensive, so having an app built into a product suite you might already own seems instantly more compelling.



Read the full article here by Engadget

Godot 3.0 Is Working On WebAssembly + WebGL 2.0 Exporting

The open-source Godot game engine continues working towards their 3.0 major release...

Read the full article here by Phoronix

So you want to be a wizard

Today I did the opening keynote at SRECon. This talk was a little less technical than my normal talks: instead of talking about tools like tcpdump (though tcpdump makes an appearance!), I wanted to talk about how to make a career where you’re constantly learning and how to be good at your job whether or not you’re the most experienced person.

Here’s the talk abstract, then the slides & a rough transcript. I’ve included links to every resource I mentioned.

I don't always feel like a wizard. Like many of you, I've been doing operations for a couple of years, and I still have a TON TO LEARN about how to do this "SRE" job.

But along the way, I have learned a few ways to debug tricky problems, get the information I need from my colleagues, and get my job done. We're going to talk about

  • how asking dumb questions is actually a superpower
  • how you can read the source code to programs when all other avenues fail
  • debugging tools that make you FEEL like a wizard
  • and how understanding what your _organization_ needs can make you amazing
At the end, we'll have a better understanding of how you can get a lot of awesome stuff done even when you're not the highest level wizard on your team.

(this transcript is nowhere near totally faithful; there’s a fair amount of “what i meant to say / what I said, kind of, I think” in here :) )

You can click on any of the slides to see a bigger version.

This talk is called "so you want to be a wizard". The main problem with being a wizard is that, of course, computers are not magic! They are logical machines that you can totally learn to understand.
So this talk is actually going to be about learning hard things and understanding complicated systems.
I work as an engineer at Stripe. (this is the job description for my job).
My team is in charge of a ton of things. Every so often I find out about a new thing that we're in charge of ("oh, there's a GPG keyserver we depend on? Okay!!")
What this means is that I (like many of you!) need to know about a ton of different systems. There are about a million things to know about Linux & networking, the AWS platform is really complicated and there's a ton to know about how it works exactly.

And there's a seemingly neverending amount of new technology to learn about. For instance we're looking at Kubernetes, and to operate a Kubernetes cluster you need to operate etcd, which means that you need to understand a bunch of distributed systems concepts to make sure you're doing it right.
So to do my job effectively, like many of you, I need to constantly learn new things. This talk is about how to do that, and why I like it.
Here are the wizard skills we're going to be discussing in this talk!
In software engineering, I think it's really important to understand both the systems that are a little higher-level than you and lower-level systems.

In reliability engineering, what lives below us is typically "systems stuff" like operating systems & networking. Above us is stuff like business requirements & the programs we're trying to make run reliably.

This talk is mostly going to be about understanding lower-level systems, but we're also going to talk a little about humans and how to make sure you're actually building the right thing :)
As a quick aside, I think understanding computer networking is so important that I wrote a whole zine about it, which you can pick up at the end of this talk.
So -- why is it important to understand the systems you work with?
I think there are 3 main important reasons:

First, understanding jargon is really useful. If someone says "hey, this process got killed by the OOM killer" it's useful to know what that means! (we're going to talk about what an OOM killer is later)

second, it lets you debug harder problems. When I set up a web server (Apache) for the first time, maybe 8 years ago, I didn't understand the HTTP protocol very well and I didn't understand what many of the configuration options I was using meant exactly.

So I would normally debug by Googling things and trying random fixes. This was a pretty viable strategy at the time (I got my webservers working!) but today when I configure webservers, it's important for me to actually understand what I'm doing and exactly what effect I expect it to have. And now I can fix problems much more easily.

rachelbythebay is a great collection of debugging stories, and it's clear throughout that she has a really deep understanding of the systems she works with.

The last reason is -- having a solid understanding of the systems you work with lets you innovate. I think Docker is a cool example of this. Docker was not the first thing to ever use namespaces (one of the kernel features that people call "containers"), but in order to make a tool that people loved to use, the Docker developers had to have a really good understanding of exactly what features Linux has to support isolating processes from others.
A system like Linux seems really intimidating at first, especially if you want to understand some of the internals a little bit. It's like 4 million or 10 million lines of code or something.
So let's talk about how to break off pieces of knowledge one at a time so that you can tackle the challenge!
My first favorite thing to do is learn fundamental concepts.

This is incredibly useful -- in networking, if you know what a packet is and how it's put together, then it really helps to tackle other more complicated concepts.

Let me tell you a quick story about how I learned what a system call was.
The Recurse Center is a 12 week programming retreat in New York where you can go to learn fun new things about programming. I went 3 years ago. RC is about learning whatever interests you (low level stuff? fancy frontend tricks? functional programming? making cool art with programming?), and I went partly with the goal of understanding operating systems better.

When I got to RC, I learned about the concept of a "system call"! (here's the blog post I wrote the day I learned that). System calls are how applications talk to the operating system. I felt kind of sad that I didn't know about them before, but the important thing was that I learned it! That's exciting!
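For a concrete feel for what a system call is, here's a small Python sketch: `os.pipe`, `os.write`, and `os.read` are thin wrappers over the pipe(2), write(2), and read(2) syscalls, and running the script under strace shows the corresponding calls directly.

```python
import os

# os.pipe, os.write, and os.read are thin wrappers around the pipe(2),
# write(2), and read(2) system calls; running this script under strace
# shows the corresponding calls directly.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello, kernel!")   # ask the OS to move these bytes
data = os.read(read_fd, 1024)           # ask the OS for the bytes back

print(data)

os.close(read_fd)
os.close(write_fd)
```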
This is the only piece of homework in this talk :)

TCP is the protocol that runs a lot of the internet that we use day to day. Often it "just works" and you don't need to think about it, but sometimes, well, we do need to think about it! So it's helpful to understand the basics.

The way I started learning about TCP was, I wrote a TCP stack in Python! This was really fun, it didn't take that long, and I learned a ton by doing it and writing up what I learned.
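To give a taste of what a toy TCP stack involves, here's a sketch of packing and unpacking the fixed 20-byte TCP header (field layout per RFC 793) with Python's struct module. A real stack also needs checksums, raw sockets, and the whole connection state machine, so treat this as the very first step.

```python
import struct

# The fixed 20-byte TCP header (RFC 793): source/destination ports,
# sequence and ack numbers, data offset + flags, window, checksum,
# urgent pointer -- all in network (big-endian) byte order.
TCP_HEADER = struct.Struct("!HHIIHHHH")

SYN = 0x02  # the SYN flag bit

def pack_tcp_header(src_port, dst_port, seq, ack, flags, window):
    data_offset = 5 << 12                 # 5 32-bit words, no options
    return TCP_HEADER.pack(src_port, dst_port, seq, ack,
                           data_offset | flags, window, 0, 0)

def unpack_tcp_header(raw):
    src, dst, seq, ack, off_flags, window, _cksum, _urg = TCP_HEADER.unpack(raw)
    return {"src": src, "dst": dst, "seq": seq, "ack": ack,
            "flags": off_flags & 0x1FF, "window": window}

header = pack_tcp_header(12345, 80, seq=1000, ack=0, flags=SYN, window=65535)
print(unpack_tcp_header(header))
```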
I also like to do experiments.
You can make your laptop run out of memory on purpose! I would show you what happens (remember the "OOM killer"? it's a system in the Linux kernel that starts just killing programs on your computer!), but I think it might not be a good live demo for a talk :).

I think doing this kind of experiment is awesome because servers run out of memory in production, and it's cool to see what that looks like and how to reason about it in a safer environment. (hint: if you run "dmesg" and search for "oom" it will show you OOM killer activity)
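If you'd like a tamer version of that experiment, you can cap a single process's address space with setrlimit and watch allocations fail with a clean MemoryError instead of summoning the OOM killer. This is a sketch under the assumption you're on Linux, where RLIMIT_AS is enforced; behaviour elsewhere may differ.

```python
import resource

# A tamer "run out of memory" experiment: cap this process's address
# space so allocations fail with a clean MemoryError instead of
# triggering the kernel's OOM killer. Assumes Linux (RLIMIT_AS enforced).
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (1024 * 1024 * 1024, hard))  # ~1 GiB

chunks = []
allocated_mb = 0
try:
    for _ in range(40):                            # hard stop, just in case
        chunks.append(bytearray(50 * 1024 * 1024))  # 50 MB at a time
except MemoryError:
    allocated_mb = len(chunks) * 50
    chunks.clear()                                 # give the memory back
    print(f"MemoryError after ~{allocated_mb} MB")

resource.setrlimit(resource.RLIMIT_AS, (soft, hard))  # restore the limit
```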
Also at the Recurse Center, I decided I wanted to write a tiny operating system in Rust. It turns out that writing an OS in 3 weeks when you don't know Rust or operating systems is hard, so I ended up writing a keyboard driver.

I learned SO MUCH by doing this -- you can read more about it here
These are the programming experiment rules I like to follow.
Sometimes I like to read books! Two books I've learned from in the last couple years are Networking for System Administrators & Linux Kernel Development. Networking for System Administrators is written for system administrators who want to be able to do basic networking tasks without having to ask their networking team. I'm not a system administrator, and I don't have a networking team, but I learned a ton by reading this book.
Another thing that I find super helpful is to try to read things or watch talks that are too hard for me at the time. For example, aphyr has an amazing series of posts about distributed systems failures (the ones called "Jepsen"). When I started reading these posts, I honestly didn't understand them very well. I didn't understand what "linearizable" meant, and I'd never worked with distributed databases. So sometimes I'd read a post and only understand maybe 20% of it.

As I learned more and came back to his writing, I was able to understand more of it! I'm still not a distributed systems expert, but I'm happy I tried to read these posts even when I didn't understand them well.

That Linux kernel development book I mentioned is kind of similar. Its goal is to give you the tools you need to become a Linux kernel developer. I am not a Linux kernel developer (or at least not yet!). But I've learned a few interesting things by reading this book.
Another maybe obvious tactic is to work with the thing in your job. Recently I needed to add some logging to a HTTP proxy we had. This was a relatively mundane task, but I learned more about how HTTP proxies work exactly by doing it! That was cool!

It's useful for me to remember that I can learn something even when I'm doing work which is sort of routine.
This is the last, and maybe most important thing. I have models of how a ton of systems work in my head. Sometimes what happens on the computers I work with does not match what my model says!

As a small example -- recently we had a computer that was swapping even though it had 16GB of free memory. This did not match my mental model ("computers only swap memory to disk when they're out of memory"). Obviously there was something wrong with my model. So I investigated, and I learned a couple new things about how swap works on Linux!
It turns out that there are actually at least 4 reasons a Linux box might swap.

1. You could be actually out of RAM.

2. You could be "mostly" out of RAM. The "vm.swappiness" sysctl setting controls how likely your machine is to swap. This isn't what was happening to us, though.

3. A cgroup could be out of RAM, which was what was happening to us at the time (here's the blog post I wrote about that).

4. There's also a 4th reason I learned about afterwards: if you have no swap, and your vm.overcommit_ratio is set to 50% (which is the default), you can end up in a situation where only half your RAM can be used. That's no good! here's a post about overcommit on Linux
So it turns out understanding swap isn't actually that simple. In fact, there's a cool 200-page book Understanding the Linux Virtual Memory Manager. It also has a bunch of annotated kernel code that handles memory management, which is awesome.

I'm happy I dug in a bit because now I understand how this part of Linux works better!
So even getting to understand something that seems relatively basic like "when does a computer start swapping?" can take a while! There's a lot to know, and it's totally okay to not know it all right away.
A quick story on my Linux journey:

In 2003, when I was 15, my mom bought me a shiny new computer. I was really excited about Linux, so I installed a ton of different Linux distributions. (also, thanks to my mom!! I'm super lucky to have had a computer that I could bork repeatedly and the time & space to do tons of experiments)

Around 2009, in university, I was one of the sysadmins for a small lab of 7 Linux & Windows computers. The old sysadmin said "hey, want to help out?", gave me the root password, and we muddled our way through getting the computers to work for a bunch of math undergrads.

In 2013 I learned what a system call is and a bunch of basic things about operating systems! This was super awesome. (here's everything I learned at the Recurse Center)
And now I'm still continuing to learn.
The next wizard skill I'm going to talk about is asking great questions!
A (great) situation I end up in a lot is where I have a coworker who knows something that I want to know, and they want to help me, and I just need to figure out the right questions to ask to get the answers I want!

Asking good questions is really important because people in general cannot just magically guess what I want them to tell me.
One of my favorite tricks is to state what I know, as a way to frame my question.
Stating what I know is awesome because it helps me organize my thoughts, reveals misunderstandings (me: "I know X", them: "that's not quite right!"), and helps me avoid answers that are too basic (yes yes yes i know that!) and too advanced (NO PLEASE BACK UP 30 STEPS FIRST).
When asking a question, it's pretty natural to want to ask the most experienced person around your question. They will probably know the answer, which is good! But I don't think it's the best strategy.

Instead, I try to remember to ask a less experienced person, who I think will still know the answer.
This is awesome because it reduces the load on the more-experienced person. But there's more reasons this is great! I'm not the most experienced member of my team. I love it when people ask me questions because -- if I don't know the answer to their questions, then I can find out, and I can grow my own knowledge.

So not asking the most experienced person is actually a cool way to show trust in less experienced team members, reduce the bus factor, and spread knowledge around.
Doing research is great! It lets me ask more complicated and interesting questions!
I really like to ask questions that are relatively easy to answer. yes/no questions are a really good way to accomplish this! And often an interesting yes/no question can lead to a great discussion.
When debugging or fixing things, often you can end up in a situation where someone who's super experienced knows how to Do A Thing, and other people on the team don't know how.

And often they have trouble remembering all the details to document them! So I like (right after someone has done something) to ask them to explain exactly what they did, or to ask if I can watch while they do it.
The last thing I have to say about asking about questions, especially to senior engineers / managers / leaders is -- please ask questions in public. I find that it's much easier for senior people to admit that they don't know something (because everybody knows you're competent already!), and doing that really creates space for everyone to ask questions.
Okay, let's talk about reading code!
Sometimes error messages are not particularly helpful. If you go read the code around where the error message got printed, sometimes you can get a better clue about what's going on!
What's more exciting to me, though, is to read the code when software is poorly documented (which happens all the time, especially when it's changing frequently or isn't used by very many people)
I want to emphasize that reading code isn't just for small projects that you're familiar with, though.

In my first job, I was writing plugins to make websites with Drupal, a PHP content management system. Once I remember I had a really specific question about how some Drupal thing worked. It wasn't documented, and there were no results on Google when I looked.

I asked my boss at the time if he knew and he told me "julia, you just have to go read the code and find out how it works!". I was a bit unsure about how to approach it ("there's so much code") but he pointed me to the relevant part of the Drupal codebase, and, sure enough, I could see the answer to my question there!

Since then I've looked at the code for a bunch of large open source projects to answer questions (nginx! linux!) and even if I'm not a super good C programmer, sometimes I can figure out the answer to my question.
Now we're going to talk about one of my favorite things! Debugging!
Let's tell a story! One day we had a client that was making a HTTP request, and it wasn't getting a response for 40 milliseconds. That's a long time!
Why is that a long time? The client and the server were on the same computer. And I expected the server to be fast, so there was no reason for a 40ms delay.
As an aside, 40ms synchronously is 25 requests per second, which is really not a lot. It's easy to see how this kind of delay could become a problem quickly.
I captured some packets with Wireshark to figure out who I should be blaming -- the client or the server!

We found out that the client would send the first part of the HTTP request, wait 40ms, and then send the rest. So the server wasn't the problem at all! But why was the client doing this? It's written in Ruby, and initially I maybe thought we should just blame Ruby, but that wasn't a really good reason (40ms is a very long time, even in Ruby).
It turned out what was happening was a bad interaction between two TCP optimizations -- delayed ACKs, and Nagle's algorithm. When the client sent the first packet, the server would wait to send an ACK (because of the delayed ACKs algorithm), and the client was waiting for that ACK (because of Nagle's algorithm).

So they were stuck in this kind of passive-aggressive-waiting situation.

I wrote a blog post about this called Why you should understand (a little) about TCP if you want to know more.
When we set the TCP_NODELAY socket option, it stopped the client from waiting, and then everything got fast!
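In Python, the equivalent one-line fix looks like this (a generic sketch of setting the socket option, not the actual client code from the story):

```python
import socket

# Disable Nagle's algorithm on a client socket so small writes go out
# immediately instead of waiting for the ACK of earlier data.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back to confirm it took effect (non-zero means on).
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY enabled:", bool(nodelay))
sock.close()
```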
A while ago I realized I felt like I'd gotten a lot better at debugging since my first job, and I came up with some reasons I think it got easier!
Sometimes when I hit a bug, especially a nondeterministic and difficult to reproduce bug, it’s tempting to think “oh you know, things just happen, who knows”. But everything on a computer does in fact happen for a logical reason (however much the computer may try to convince you otherwise). Reminding myself of that helps me fix bugs. Also known as “OK JULIA IT IS NOT FAIRIES WHAT ACTUAL REASON COULD BE CAUSING THIS?”
Next up, confidence!
A while ago I dealt with a performance problem in a Hadoop job at work that took me 2 weeks to fix (see a millisecond isn’t fast). If I hadn’t been able to fix it, I would have felt pretty bad and like it was a waste of 2 weeks.

But we were processing a relatively small number of records, and it was taking 15 hours to do it, and it was NOT REASONABLE and I knew that the job was too slow. And I figured it out, and now it’s faster and everyone is happy.
From that, I learned that floating point exponentiation is slow, and that 1000 records/second isn't really a lot.
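A quick microbenchmark sketch of that first lesson (this is Python rather than Hadoop, and the exact ratio depends on your machine, so it's just illustrative):

```python
import time

# Compare floating point exponentiation against plain multiplication
n = 1_000_000
xs = [1.0001 * i for i in range(1, n + 1)]

start = time.perf_counter()
total = 0.0
for x in xs:
    total += x ** 0.847   # arbitrary non-integer exponent
pow_time = time.perf_counter() - start

start = time.perf_counter()
total = 0.0
for x in xs:
    total += x * x
mul_time = time.perf_counter() - start

print("pow: %.3fs  mul: %.3fs" % (pow_time, mul_time))
```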
The job was processing 1000 records/second. I found this hard to think about at the time though -- was that a lot? not a lot? How was I supposed to know?

So I decided I wanted to take some time to train my intuitions about how fast different computer operations should be.
I made this game called computers are fast with my partner Kamal. You can go play it online, and we're going to play it now a little bit!
Suppose you have an indexed database table, with 10 million rows in it. How long does it take to select a row from that table? How many times per second can you do that?

The goal isn't to know exactly, but I think it's useful to be right to within an order of magnitude. So can you do it 100 times in a second? 10,000 times? 10 million times?
It turns out the answer on my laptop is 55,000 times! (or, it takes about 20 microseconds, in Python)
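Here's a rough sketch of how you might measure something like this yourself, using sqlite3 from Python's standard library (fewer rows than 10 million, and the absolute numbers will depend on your machine):

```python
import sqlite3
import time

# Build an indexed table in memory (id is the primary key, so lookups
# by id use the index)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 ((i, "row %d" % i) for i in range(100_000)))
conn.commit()

# Time a batch of indexed point lookups
n = 10_000
start = time.perf_counter()
for i in range(n):
    conn.execute("SELECT val FROM t WHERE id = ?", (i,)).fetchone()
elapsed = time.perf_counter() - start
print("%.0f lookups/second" % (n / elapsed))
```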
It's also been incredibly helpful for me to have better tools for answering questions about my programs!

When I started out, I didn't have very good tools! But now I know about all kinds of profilers! I know about strace, and tcpdump, and way more tools for figuring out what's going on. It makes a huge difference.
I wrote a whole zine about debugging tools that have helped me answer questions. You can read it here: Linux debugging tools you'll love
But maybe the most important thing is that I learned to like debugging! I used to get grumpy when I ran into bugs. I felt like they were just getting in my way!

But these days, when I run into a mysterious bug, I think it's kind of fun! I get to improve my understanding of the systems I work with, which is awesome!
Now we're going to zoom out a bit from talking about networking and microseconds, and talk about how to design engineering projects. This is something that's really helped me a lot!
There are a lot of words for design document, but they're basically all the same idea -- you write down words about what work you're going to do before doing the work.
When I started at Stripe I thought writing stuff down was kind of dumb. Why couldn't I just start working on the project?

But since then, I've learned to find them really useful! (learning to like design documents)
I was worried that if I wrote a document, I'd either get WAY TOO MUCH feedback, or total silence.

One thing I learned is that it's helpful at first to just share a design I'm working on with a few people. Like I'll show it to a couple of other people on my team, see what they think, and then make changes! It's not always necessary to ask every single person who might have an opinion what they think.
I also learned to like designing small projects. Recently I worked on a tiny project that just took about a week. My team lead asked me if I could quickly write up what we were going to do.

It took me maybe 45 minutes to write up the plan (super fast!), I showed it to a manager on another team, he had a couple of things he asked me to do differently, and he was SO HAPPY I'd written down a plan so that he understood what was going on. Awesome!

The small project went super smoothly and I was really happy I wrote up a plan for it first.
How do you know what to write in a design document? I really like to start by writing an announcement email, as if we just finished the project.

This is great because it forces me to articulate why the project is important (why did we spend all that time on it?), how it's going to impact other teams and what other people in the organization need to know about it, and how we'll know that we actually met our goals for the project.

The last thing is really important -- more often than I'd like to admit, I get to the end of a project and realize I'm not quite sure how we can tell whether the project is actually going to improve things or not. Planning that out at the beginning helps make sure that we put in the right metrics!
It's also useful to talk about risks! I actually haven't done this yet, but a cool idea I heard recently for figuring out risks was to do a "premortem" for your project. This is kind of the opposite of an announcement email -- instead, you imagine that the project failed 6 months down the line, and you're making sure you understand why it failed.
When I started writing designs, I used to worry a lot that my design would be wrong because things were going to change. It turns out that this is totally true -- designs rarely survive contact with the real world. Priorities change, you run into technical challenges you didn't expect, all kinds of things can go wrong.

But this doesn't mean it's not worth designing at all! I like writing down my assumptions explicitly because when things do change, I can go back and see which of our assumptions are no longer true, and make sure that we update everything we need to update. Having a record of changes is useful!
We've arrived at the last wizard skill!
Sometimes I'm working on something kind of boring, and I wonder like.. why am I doing this?

I usually find it possible to stay motivated if I can remember "ok, I'm spending hours working on configuring nginx, and this is boring, but it's in service of this really cool goal!"

But if I *don't* remember the goal (or what I'm working on actually doesn't make sense), it sucks.
My current solution to this is to approach project planning with the same kind of excitement and curiosity you might bring to a gnarly bug!

I'm trying to get better at saying "okay!!! this project! it has some slow and difficult pieces, so why is it so important? why are we going to feel awesome when it's done? which parts are the most important?"
I have a lot of autonomy about what I get to work on, so when someone asks me to do something, I like to make sure I understand why it's important. Usually if I don't understand, the right thing to do is to just find out why it's important (and usually it actually is!).

But sometimes the task I'm being given is only maybe 80% thought through, and when I go to understand the exact reason for doing it, it turns out that we don't need to do it at all! (or maybe we should actually be doing something completely different!)
And understanding the big picture helps me make better technical decisions!
Like a lot of people, I think a lot about the impact my work has and what I'm really doing here. Kelsey Hightower had a really amazing series of tweets today. Here are a couple of quotes.

> I’ve yet to find the perfect job or thing to work on, but I have found a way to live a more meaningful life in tech.

> I now put people first. Regardless of the technology involved I gravitate towards helping people.

> People provide a much better feedback loop than computers or the abstract idea of a business.

> Everything I work on has a specific person or group of people in mind; this is what gives my work meaning; solving problems is not enough.
During this conference, I hope you ask a ton of questions to better understand what's going on with this "SRE" thing. There are so many amazing people to learn from!
I handed out fun networking zines at the end of this talk. If you want to read the zine, it's here: Networking! ACK!


Read the full article here by Julia Evans