Read more of this story at Slashdot.
At a press conference held today in San Jose, Calif., Qualcomm Datacenter Technologies, Inc., a subsidiary of Qualcomm Incorporated (NASDAQ: QCOM), officially announced commercial shipment of the world's first and only 10 nanometer server processor series: the Qualcomm Centriq 2400 processor family. The Qualcomm Centriq 2400 processor family is the first high-performance Arm-based processor series designed to offer groundbreaking throughput performance for cloud workloads running in today's datacenters. Purpose-built for the cloud, the Qualcomm Centriq 2400 server processor family delivers exceptional performance-per-watt and performance-per-dollar.
"Today's announcement is an important achievement and the culmination of more than four years of intense design, development and ecosystem enablement effort," said Anand Chandrasekher, senior vice president and general manager, Qualcomm Datacenter Technologies, Inc. "We have designed the most advanced Arm-based server processor in the world that delivers high performance coupled with the highest energy efficiency, enabling our customers to realize significant cost savings."
Qualcomm Centriq 2400 Processor Family Technical Specs
The Qualcomm Centriq 2400 processor family is a single-chip platform-level solution built using Samsung's 10 nanometer FinFET process, packing 18 billion transistors into only 398 mm². It contains up to 48 high-performance, 64-bit, single-thread cores running at up to 2.6 GHz. The cores are connected with a bi-directional segmented ring bus with 250 GB/s of aggregate bandwidth to avoid performance bottlenecks under full load. To maximize performance under various use cases, the design has 512 KB of shared L2 cache for every two cores, and 60 MB of unified L3 cache distributed on the die. It has 6 channels of DDR4 memory and can support up to 768 GB of total DRAM capacity, with 32 PCIe Gen3 lanes and 6 PCIe controllers. The Qualcomm Centriq 2400 processor family also supports Arm's TrustZone secure operating environment, and supports hypervisors for virtualization. The Qualcomm Centriq 2400 is able to achieve exceptional performance while consuming less than 120 watts.
With a list price of $1,995, the 48-core Qualcomm Centriq 2460 processor offers greater than 4X better performance-per-dollar and up to 45% better performance-per-watt versus Intel's highest-performance Skylake processor, the Intel Xeon Platinum 8180.
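To see how a performance-per-dollar claim like this decomposes, here is a minimal sketch of the arithmetic. The list prices are the ones quoted above and Intel's published launch list price for the Xeon Platinum 8180; the throughput scores are hypothetical placeholders, not published benchmark results.

```python
# Sketch of a performance-per-dollar comparison. Prices are the list prices
# discussed in the article; the throughput scores below are hypothetical
# placeholders, NOT real benchmark numbers.

def perf_per_dollar(score: float, price: float) -> float:
    """Throughput per dollar of list price."""
    return score / price

centriq_2460_price = 1995.0   # USD, from the announcement
xeon_8180_price = 10009.0     # USD, Intel's launch list price (assumption)

# With equal hypothetical scores, the price gap alone yields roughly 5x:
ratio = perf_per_dollar(100.0, centriq_2460_price) / perf_per_dollar(100.0, xeon_8180_price)
print(round(ratio, 2))  # ~5.02
```

In other words, at these list prices the Centriq 2460 only needs to match the 8180's throughput to clear the ">4X" bar; Qualcomm's actual claim presumably rests on its own workload measurements.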
A number of cloud service providers and technology companies participated in the launch event and demonstrated applications running in an Arm-based datacenter on the Qualcomm Centriq 2400 series. These included Alibaba, LinkedIn, Cloudflare, American Megatrends Inc., Arm, Cadence Design Systems, Canonical, Chelsio Communications, Excelero, Hewlett Packard Enterprise, Illumina, MariaDB, Mellanox, Microsoft Azure, MongoDB, Netronome, Packet, Red Hat, ScyllaDB, 6WIND, Samsung, Solarflare, Smartcore, SUSE, Synopsys, Uber, and Xilinx.
Optimized for common cloud workloads, the Qualcomm Centriq 2400 processor family has been designed to deliver throughput performance for highly threaded cloud native applications that are developed as micro-services and deployed for scale-out. Some of the key applications and use cases demonstrated today include:
"The Clear Linux Distro is slickkkkk AF. Simple and minimal just the way I ❤️ it. Nice work Intel! https://t.co/JQ7SxkgjlK" — jessie frazelle (@jessfraz), October 26, 2017
By Alex Kierstein
Supercapacitors, not batteries, power this wild creation.
Lamborghini isn't known for bowing to convention. Ever since Ferruccio told Enzo Ferrari where he could stick his temperamental cars, and decided to build his own, it's been going its own way. This car, the Terzo Millennio ("third millennium"), built in collaboration with MIT, is very much a Lamborghini approach to the EV performance car. To begin with, it is obviously nothing else but a Lamborghini. Wild concept car touches aside, it's as wedgy and mean as anything to come out of Sant'Agata (and definitely anything to come out of Cambridge, MA!).
There's some advanced tech here, too, as you might imagine with MIT's involvement. The most notable is the supercapacitor energy storage technology. Supercapacitors aren't ready for primetime yet, being very expensive and not as energy-dense as batteries, although they're currently used in a few niche automotive applications. But MIT and Lamborghini want to produce one that'll work more like a main battery, but with greater ability to recharge and discharge quickly. That's ideal for brutal, explosive acceleration. If Lamborghini and MIT can make a breakthrough here, it'd let the decidedly conventional Lamborghinis of today (naturally aspirated, non-electrified) take a leap into the future on Lamborghini's own terms.
The company is also exploring carbon composite batteries utilizing nanotechnology, which it claims would reduce weight and increase the discharge capacity of the batteries. So, it seems, the Terzo Millennio might combine the two power storage technologies in the same drivetrain.
Of course, like most modern Lamborghinis there's all-wheel drive, although it's in-wheel electric motor-based rather than the conventional mechanical type. It'd be easy enough to leave off the front motors for a Performante variant, perhaps. Or maybe in-wheel electric motors become the norm in the future. It's too early to tell; for now, this concept is AWD.
The body is carbon fiber, and this concept previews some technology that can detect degradation in the carbon fiber early and potentially repair it with something called "nano-channels" utilizing a form of the technology that allows carbon composite materials to store energy. This technology deserves more explanation than we have room for here, and we'll get a deep dive on it as soon as we can.
Lastly, Lamborghini wants a vehicle like the Terzo Millennio to sound like a Lamborghini. That won't be easy, since there's no V12 or V10 to be found. Frankly, the company doesn't even sound like it has figured out how to solve that particular puzzle, stating that "[a] deep investigation is needed" to determine what it should sound like. That's no easy task, and we suspect the engineers will be busy on that one.
Will you see a Terzo Millennio out on the road? Almost certainly not; this is a pure concept that shows the direction that Lamborghini will take in the future. At least we finally know where Sant'Agata's head is at in terms of post-internal combustion powertrain tech.
Wireless chip manufacturer Broadcom has officially bid to buy its rival Qualcomm in a deal worth up to $130 billion. If successful, it would become the richest tech acquisition in history, nearly doubling Dell's $67 billion purchase of EMC in 2015. Rumors of the acquisition started popping up yesterday, and the offer will likely kick off a pitched battle between the two companies and regulators around the world.
Qualcomm has been particularly vulnerable lately, having been fined in Korea and Taiwan for breaching anti-trust rules, and fighting battles with Apple and other firms over wireless technology patents. The company has even gone so far as to file a lawsuit seeking to ban sales of the iPhone in China. As a result of all that, its stock has fallen steeply, making it a much better bargain than it was just a year ago. Broadcom's offer now represents a 30 percent premium over the current stock price.
"We would not make this offer if we were not confident that our common global customers would embrace the proposed combination," said Broadcom CEO Hock Tan. "Our proposal provides Qualcomm stockholders with a substantial and immediate premium in cash for their shares, as well as the opportunity to participate in the upside potential of the combined company."
Qualcomm is not that thrilled about the unsolicited offer, however, according to Bloomberg and other sources. It will reportedly recommend that shareholders reject it, saying that it could get held up in regulatory purgatory, negating any potential stock gains.
Broadcom itself was recently acquired by Singapore-based Avago for $37 billion. The combined company, now called Broadcom, made a much-publicized announcement with Donald Trump that it would move its home base to Delaware (the company currently has joint headquarters in San Jose, California and Singapore).
Both companies currently supply Apple with wireless chips and patent licenses, and the deal would make Broadcom the world's third-largest chipmaker after Intel and Samsung. If successful, Broadcom/Qualcomm would have a virtual monopoly and vast patent portfolio on wireless chips and technology. That could have a sharp effect on the supply and cost of WiFi and cell modem chips, so US regulators might have serious concerns. In other words, don't expect this deal to close anytime in the near future.
Via: Bloomberg
Source: Broadcom
Today we have an announcement out of left field. Intel has formally revealed it has been working on a new series of processors that combine its high-performance x86 cores with AMD Radeon Graphics into the same processor package using Intel’s own EMIB multi-die technology. If that wasn’t enough, Intel also announced that it is bundling the design with the latest high-bandwidth memory, HBM2.
Intel has been discussing its EMIB technology over the last twelve months, with the core theme being the ability to put multiple, different silicon dies onto the same package at a much higher bandwidth than a standard multi-chip package, but at a much lower cost than using a silicon interposer. At Intel's Manufacturing Day earlier this year, the company even produced a slide (above) showcasing what might be possible: a processor package with the x86 cores made on one process technology, the graphics made on another, and perhaps different IO, memory, or wireless technologies too. With EMIB, processor design can become a large game of Lego.
EMIB came to market with the latest Intel Altera FPGAs. By embedding the EMIB-required silicon design into the main FPGA and each of the chiplets, the goal was to add multiple memory blocks as well as data transfer blocks in a mix-and-match scenario, allowing large customers to have the design tailored to what they require. The benefits of EMIB were clear, without the drawbacks of a standard MCP design or the cost of interposers: it would also allow a design to go beyond the monolithic reticle limit of standard lithography processes. It was always expected that EMIB would have to find its way into the general processor market, as we start to see high-end server offerings approaching 900 mm² over multiple silicon dies in a single package.
Since the EMIB announcements, Intel's Manufacturing Day, and Hot Chips, word has been circulating about how Intel is going to approach this from a consumer standpoint. As part of the requirements of Intel's own integrated graphics solutions, a 2011 cross-licensing deal with NVIDIA was in place; this deal was set to expire on April 1, 2017, and no mention of extending it was ever made public. A couple of rumors floated around that Intel was set to make a deal with AMD instead, as despite their x86 rivalry AMD was a preferred partner in these matters. Numerous outlets with connections inside both AMD and Intel had difficulty prying any information out; historically, Intel refuses to comment on such matters in advance. Other potential leaks included published benchmarks over at SiSoft, although nothing was made concrete until today.
Breaking News, More to Follow...
The interstellar battle between the terran, zerg, and protoss is about to consume even more of the galaxy, as Blizzard Entertainment today announced that the ultimate real-time strategy game will soon be free to play.
Beginning November 14, players around the world will get free access to the full award-winning Wings of Liberty campaign and the elite multiplayer competition of StarCraft II's ranked ladder, including all the latest units and balance updates through the latest release in the series, Legacy of the Void. Players will also enjoy expanded access to StarCraft II's highly popular co-op mode, with all Co-op Commanders playable for free.†
"StarCraft II is one of the highest-rated PC games of all time, and we're excited to give even more people around the world a chance to find out why it has inspired such a passionate global community," said Mike Morhaime, CEO and cofounder of Blizzard Entertainment. "With the massive Wings of Liberty single-player campaign, endlessly replayable co-op mode, prestigious ranked ladder, comprehensive map-making tools, and more, StarCraft II now delivers the ultimate real-time strategy experience completely free."
Those who wish to experience later chapters in the StarCraft II story can purchase the additional single-player campaigns, Heart of the Swarm, Legacy of the Void, and Nova Covert Ops, individually ($14.99 USD each) or together in the Campaign Collection ($39.99) through the online Blizzard Shop. For a limited time, players who already own Wings of Liberty (as of October 31, 2017) but have not yet purchased Heart of the Swarm will be able to get the latter campaign free by logging in to the Blizzard Battle.net desktop app and claiming their free Heart of the Swarm gift between November 8 and December 8, 2017.
As a thank-you to the dedicated StarCraft II player community, those who already own any version of StarCraft II prior to October 31, 2017 will receive a special Eidolon Ghost skin and three portraits commemorating their founder's status in the game after StarCraft II goes free starting on November 14.
StarCraft II is available today for Windows and Mac PCs fully localized into English, Brazilian Portuguese, European and Latin American Spanish, French, German, Italian, Polish, Russian, Korean, and simplified and traditional Chinese. Learn more about the game at www.starcraft2.com .
† Access to the ranked ladder is granted after earning your first 10 wins of the day in Unranked mode or Versus A.I. mode, or upon purchase of any campaign, the Campaign Collection, or the War Chest. Co-op Commanders are available for play free of charge through level five. Purchase is required to advance a Co-op Commander beyond level five, with the exception of Raynor, Kerrigan, and Artanis, who are free to play up to and beyond level five.
Reports are coming in that Amazon registered three new domains on Tuesday. Normally, this wouldn't raise any eyebrows at all. However, these domain names stand out: all three are related to cryptocurrency. The domains are amazonethereum.com, amazoncryptocurrency.com, and amazoncryptocurrencies.com. According to the registration information in the Whois database, all three are registered to Amazon Technologies, Inc., a subsidiary of Amazon.com, Inc.
At the time of writing, it's not clear what the reasoning behind Amazon's move is. One theory is that the tech giant is finally getting into the cryptocurrency business. Another is that it's simply a defensive move to protect the Amazon brand, similar to when the company registered amazonbitcoin.com back in 2013, which redirects users to the Amazon front page. Some consider it an indication that Amazon might start accepting cryptocurrencies like bitcoin, despite Amazon Pay VP Patrick Gauthier telling CNBC last month that there were no immediate plans to accept cryptocurrency.
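For those curious how such reports are sourced, the registrant comes from the plain-text Whois record for each domain. Below is a minimal sketch of extracting that field; the sample record is a hypothetical fragment, not the actual Whois response for these domains.

```python
# Minimal sketch: pull the registrant organization out of raw Whois output.
# The `sample` text below is a hypothetical fragment for illustration, not
# the real Whois record for Amazon's domains.

def registrant_org(whois_text):
    """Return the 'Registrant Organization' value, or None if absent."""
    for line in whois_text.splitlines():
        if line.strip().lower().startswith("registrant organization:"):
            return line.split(":", 1)[1].strip()
    return None

sample = """Domain Name: amazonethereum.com
Registrant Organization: Amazon Technologies, Inc.
Registrant State/Province: NV"""

print(registrant_org(sample))  # Amazon Technologies, Inc.
```

In practice you would feed this the output of a `whois amazonethereum.com` query rather than a hard-coded string.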
Source: CNBC
Manu_PT said:
Game type? You talk like PUBG is the ultimate CPU-bound game or what? Do you even know what a raid in WoW is? Go figure.
PUBG is just unoptimized crap that will die as soon as the big sharks enter the battle royale scene. Fortnite already stole a lot of players and it isn't even a good game. We'll talk in 1 year (Battlefield BR, CoD BR, Blizzard BR, etc.)
Well, Fortnite has graphics that are damn near equivalent to modded Minecraft, and it's made by the same studio that made the engine for the game. Bluehole originally had to do some workarounds for limitations of UE4, since it is not meant for that type of game. But they took a chance because it was a non-proprietary engine with a ton of documentation, so anything they needed to do could be done relatively quickly. Epic Games made a ton of changes to UE4 to accommodate the demands of a battle royale game when they made Fortnite. Bluehole will get those changes when the official update to the engine is sent out to all studios using it.
Of course, if a AAA studio dives into the battle royale genre they will likely beat out PUBG, but those AAA studios also have far more talent and resources at their disposal. PUBG started from a studio with fewer than 40 developers. It expanded to over 100 within 6 months of the game being in early access, and has made leaps and bounds in client performance.
PUBG isn't that unoptimized; the only people I see even mentioning those words these days have computers from what's now considered the stone age, and expect devs to bend over backwards for them and their dual-core rigs with GPUs from 5 years ago.
Again, this game is early access. It only has about a year and a half of development time under its belt. It has improved far more in that short amount of time than H1Z1, which has been in early access for 4 years with damn near zero progress on that particular title.
Simply put, your expectations for an unfinished product are stupidly high.
Feral Interactive has now published the system requirements for their Linux port of F1 2017, with this racing game set to be released on Thursday. As previously mentioned, this racing game is using Vulkan, and in fact exclusively uses Vulkan (i.e. no OpenGL). Feral has taken to Reddit this morning to confirm the final system requirements for the Linux port:
To play F1 2017, you will need a 3.3GHz Intel Core i3-3225 processor running Ubuntu 17.04, 4GB RAM and a 2GB Nvidia 680 or 3rd Generation AMD Graphics Core Next (Volcanic Islands) graphics card or better*.
For best performance, we recommend a 3.5GHz Intel Core i5-6600K, 8GB RAM and an 8GB Nvidia 1070 graphics card or better.
*Intel graphics cards are not supported. Nvidia graphics cards require drivers 384.90 or later. AMD graphics cards require Mesa drivers 17.2.2.
So in short: a GeForce GTX 680 with the NVIDIA binary blob, or any AMD GCN 1.2 Volcanic Islands GPU or better, i.e. the cards that use the AMDGPU DRM driver by default, like the Radeon R9 285 and newer. But Feral recommends the GeForce GTX 1070 or better for best performance.
The important bit for Radeon RADV users is being with Mesa 17.2.2 or newer. It doesn't appear AMDGPU-PRO is supported.
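Since the driver minimums are the sticking point here, a small version-comparison sketch can make the check concrete. Fetching your actual driver version (e.g. from `nvidia-smi` or `glxinfo`) is left out; the function below only does the numeric comparison against Feral's stated minimums.

```python
# Sketch: check a dotted driver version string against Feral's stated
# minimums (NVIDIA 384.90, Mesa 17.2.2). This compares components
# numerically, which matters for cases like 17.10 vs 17.2.2.

def meets_minimum(version: str, minimum: str) -> bool:
    """True if `version` >= `minimum`, comparing dotted components as ints."""
    parse = lambda v: [int(part) for part in v.split(".")]
    return parse(version) >= parse(minimum)

print(meets_minimum("384.90", "384.90"))  # True: exactly the minimum
print(meets_minimum("375.82", "384.90"))  # False: too old
print(meets_minimum("17.10", "17.2.2"))   # True: 10 > 2 numerically
```

A naive string comparison would get the last case wrong, since "17.10" sorts before "17.2" lexicographically.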
Under Windows, F1 2017 needs a GTX 460 / HD 5870 or better but recommends a GTX 1070 / RX 480. So the Linux requirements for F1 2017 are stiffer than under Windows, and the port relies upon Vulkan rather than D3D11.
We will be having plenty of Vulkan F1 2017 Linux NVIDIA/Radeon benchmarks on launch day.
My VM is lighter (and safer) than your container, Manco et al., SOSP '17
Can we have the improved isolation of VMs, with the efficiency of containers? In today’s paper choice the authors investigate the boundaries of Xen-based VM performance. They find and eliminate bottlenecks when launching large numbers of lightweight VMs (both unikernels and minimal Linux VMs). The resulting system is called LightVM and with a minimal unikernel image, it’s possible to boot a VM in 4ms. For comparison, fork/exec on Linux takes approximately 1ms. On the same system, Docker containers start in about 150ms.
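To put the fork/exec baseline in context, here is a rough sketch of timing process creation on Linux. It measures Python's `os.fork()` round trip, which is only a loose analogue of the ~1ms fork/exec figure, not a reproduction of the paper's measurement; absolute numbers will vary by machine.

```python
# Rough sketch: average the cost of a fork()+waitpid() round trip on Linux.
# This is a loose analogue of the ~1ms fork/exec baseline quoted above,
# not the paper's methodology; results vary by machine and load.

import os
import time

def time_fork(iterations: int = 100) -> float:
    """Average seconds per fork() + waitpid() round trip."""
    start = time.perf_counter()
    for _ in range(iterations):
        pid = os.fork()
        if pid == 0:
            os._exit(0)       # child exits immediately
        os.waitpid(pid, 0)    # parent reaps the child
    return (time.perf_counter() - start) / iterations

avg = time_fork()
print(f"fork round trip: {avg * 1e3:.3f} ms")
```

Comparing this number against a 4ms unikernel boot and a 150ms Docker start gives a feel for how narrow the gap LightVM closes really is.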
These results are obtained when the LightVM guest is a unikernel. You're probably only going to create a unikernel in specialised cases. (One interesting such case in the paper is a Micropython-based unikernel that can be used to support serverless function execution). The authors also create an automated build system called Tinyx for creating minimalistic Linux VM images targeted at running a single application. If we look at boot times for a Tinyx VM as compared to Docker, the performance is very close up to about 250 VMs/containers per core.
Beyond that point, Docker starts to edge it, since even the idle minimal Linux distribution created by Tinyx does run some occasional background tasks.
As the following chart shows, the biggest single factor limiting the scalability and performance of virtualisation is the size of the guest VMs. To produce that chart, a unikernel VM was booted from ramdisk, with varying sizes of binary objects injected into the uncompressed image file. So all the effects are due to image size.
So if we want fast booting, we know that image size is going to matter. We've looked at unikernels on The Morning Paper before, and they give you the smallest possible guest image. In this paper, the authors use Mini-OS to create a variety of unikernels, including the 'daytime' unikernel implementing a TCP service that returns the current time. This is 480KB uncompressed, and runs in 3.6MB of RAM. This unikernel is used to test the lower bound of memory consumption for possible VMs.
Making your own unikernel image based on Mini-OS is probably more work than most people are prepared to do, though, so the authors also created Tinyx.
Tinyx is an automated build system that creates minimalistic Linux VM images targeted at running a single application. The tool builds, in essence, a VM consisting of a minimalistic, Linux-based distribution along with an optimized Linux kernel. It provides a middle point between a highly specialized unikernel, which has the best performance but requires porting of applications to a minimalistic OS, and a full-fledged general-purpose OS VM that supports a large number of applications out of the box but incurs performance overheads.
Tinyx creates kernel images that are half the size of typical Debian kernels, and have significantly smaller runtime memory usage (1.6MB for Tinyx vs 8MB for Debian).
Using the small VM images thus obtained, we can probe the behaviour of Xen itself when launching lots of VMs. When launching 1000 guests, here are the boot and create times for Debian (minimal install), Tinyx, MiniOS (unikernel), and for comparison on the same hardware: Docker containers and simple process creation.
As we keep creating VMs, the creation time increases noticeably (note the logarithmic scale): it takes 42s, 10s and 700ms to create the thousandth Debian, Tinyx, and unikernel guest, respectively.
As the size of the VM decreases, the creation time is responsible for ever larger portions of the overall time taken to get to availability. To understand where all the time was going, the team instrumented Xen to reveal this picture:
XenStore interaction and device creation dominate. Of these, the device creation overhead is fairly constant, but the XenStore overhead grows superlinearly.
Our target is to achieve VM boot times comparable to process startup times. Xen has not been engineered for this objective, as the results in the previous section show, and the root of these problems is deeper than just inefficient code. For instance, one fundamental problem with the XenStore is its centralized, filesystem-like API which is simply too slow for use during VM creation and boot, requiring tens of interrupts and privilege domain crossings.
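The superlinear growth has a simple intuition: if creating the n-th VM touches a number of store entries roughly proportional to the VMs already registered, per-VM cost grows linearly and cumulative cost grows quadratically. The toy model below illustrates that shape; the constants are made up for illustration and are not the paper's measurements.

```python
# Toy model of superlinear centralized-store overhead. Assumes creating
# the n-th VM touches entries proportional to the n-1 existing VMs, at a
# fixed cost per access. The constants are invented for illustration only.

def creation_cost(n: int, per_entry_us: float = 50.0, base_us: float = 500.0) -> float:
    """Microseconds to create the n-th VM under the toy model."""
    return base_us + per_entry_us * (n - 1)

def total_cost(vms: int) -> float:
    """Cumulative microseconds to create `vms` VMs (quadratic in `vms`)."""
    return sum(creation_cost(n) for n in range(1, vms + 1))

# Per-VM cost grows linearly, so the cumulative cost grows quadratically:
print(total_cost(10))    # 7250.0 us
print(total_cost(1000))  # 25475000.0 us: ~3500x the 10-VM total, not 100x
```

That 100x-more-VMs-for-3500x-more-time shape is exactly what the measured 42s/10s/700ms thousandth-guest numbers hint at, and why LightVM removes the XenStore from the critical path.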
I bet it was hard to conceive of anyone launching 1000 guest VMs when that design was first created!
LightVM redesigns the Xen control plane with a lean driver called noxs (for ‘no XenStore’) that replaces the XenStore and allows direct communication between front-end and back-end drivers via shared memory.
LightVM also keeps on hand a pool of pre-prepared VM shells, through which all the processing common to all VMs is done in the background. When a VM creation command is issued, a suitable shell fitting the VM requirements is taken from the pool and only the final initialisation steps such as loading the kernel image into memory and finalising device initialisation need to be done.
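The shell pool is an instance of the general pre-warming pattern: do the slow, instance-independent setup ahead of time so acquisition only pays for the final step. Here is a generic sketch of that pattern in Python, with a plain dict standing in for a VM shell; names like `ShellPool` and the replenish-inline behaviour are illustrative, not LightVM's actual implementation.

```python
# Generic sketch of LightVM's pre-warmed shell pool pattern. A "shell" here
# is a plain dict standing in for a partially initialised VM; class and
# field names are invented for illustration. In LightVM the pool is
# replenished asynchronously; here we replenish inline for simplicity.

from collections import deque

class ShellPool:
    def __init__(self, target_size: int = 4):
        self.target_size = target_size
        self.free = deque()
        self.replenish()

    def _prepare_shell(self) -> dict:
        # Stand-in for the slow, VM-independent work (memory reservation,
        # device pre-creation) done in the background ahead of time.
        return {"devices_ready": True, "kernel": None}

    def replenish(self) -> None:
        while len(self.free) < self.target_size:
            self.free.append(self._prepare_shell())

    def create_vm(self, kernel_image: str) -> dict:
        shell = self.free.popleft()      # fast path: grab a ready shell
        shell["kernel"] = kernel_image   # only final initialisation remains
        self.replenish()                 # refill the pool for the next request
        return shell

pool = ShellPool()
vm = pool.create_vm("daytime-unikernel.img")
print(vm["kernel"])  # daytime-unikernel.img
```

The design choice is the same one behind thread pools and connection pools: amortise fixed setup cost off the latency-critical path.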
Device creation in standard Xen ends up calling bash scripts, which is a slow process. LightVM replaces this with a binary daemon that executes a pre-defined setup with no forking or bash scripts.
We saw the boot times for LightVM with a variety of images at the start of this post. Furthermore, LightVM can save a VM in around 30ms, and restore it in 20ms. Standard Xen needs 128ms and 550ms respectively.
Unikernel memory usage is fairly close to Docker containers. Tinyx needs more, but only 22GB more across 1000 guests. That’s a small fraction of the RAM of current servers.
CPU usage for VMs can also be on a par with containers, so long as the VMs are trimmed to include only the necessary functionality:
The authors present four different use cases where LightVM + lightweight VMs can shine.
In all the following scenarios, using containers would help performance but weaken isolation, while using full-blown VMs would provide the same isolation as lightweight VMs, but with poorer performance.
The use cases we presented show that there is a real need for lightweight virtualization, and that it is possible to simultaneously achieve both good isolation and performance on par with or better than containers.
A recent post on the Google Cloud Platform blog, 'Demystifying container vs VM-based security: security in plaintext', provides an interesting perspective on container security and isolation, from a company that has been running a container-based infrastructure for a very long time.
Google is getting serious about its role as a travel service. Come April 10, 2018, the search giant will shut down access to its QPX Express API that's used by the likes of Kayak and Orbitz for airfare data, as spotted by Hacker News. This could seriously affect those sites' customers. Travel listings are one thing, but if you've stored a ton of reward points with one and it isn't able to fetch flight times and prices for a trip next summer, that could cause some headaches.
As The Verge and TechCrunch note, there are data alternatives for travel websites including Fareportal and Skypicker. TechCrunch reports that Google is also closing down ITA's flight-search app as of next month. Mountain View recently launched its Trips app as a way to gather all your travel data from Gmail and present it in a mobile-friendly way, so yeah, you could say this isn't a passing interest for the company. Why? Well, mining travel data will give it another avenue for pushing ads your way.
At the same time, it'll push people toward using Google's own flight searches and tracking; it's a move that's good for Google and basically no one else. Five years ago, Google bought ITA Software, the company behind an airfare search system known as QPX, which is used by Bing Travel and CheapTickets, and airlines including American and United. At the time the Department of Justice approved the acquisition under the condition that Google would keep the API public for five years. That five years is now over.
Via: Hacker News, TechCrunch
Source: Google