Friday, 30 June 2017

Blueprints from Hell: Share The Horror

Sorry for being slowpokes, but we’ve just discovered a treasure trove of the most horrific UE4 Blueprints.

Having a nice Blueprint setup that is clear, understandable and easy on the eyes is something of an unachievable goal for mere UE4 users. Yes, we see it in the images, tutorials, and other videos, but we never actually achieve it. What we do achieve is a set of “Blueprints from Hell”. And now there’s a website for that.

If you ever feel like your horrible Blueprints have matured enough to be shared with the world, please submit your doodles to the “Blueprints from Hell” – a wonderful community blog that features screenshots of some of the worst Blueprint setups ever.





Read the full article here by 80lvl

F1 | Facebook: a streaming opportunity for Liberty Media

We are almost halfway through the first championship under the Liberty banner: a few changes have appeared, but the bulk is still to...

Read the full article here by FormulaPassion.it

A New Mali-400 Open-Source Graphics Driver Is In Development

An open-source contributor has begun working on a new open-source driver for the ARM Mali-400 GPU.

Qiang Yu, who appears to still work for AMD, has begun working on an open-source Mali-400 graphics driver. So far this Mesa Gallium3D driver can run a simple OpenGL triangle-draw application, and he is now working on the shader compiler bits.

This new Mali-400 Gallium3D driver isn't based directly on the past work of Luc Verhaegen's Lima driver, but it reuses the hardware information learned from those efforts and the Lima driver's Hilbert curve rendering algorithm. The new driver consists of the Gallium3D driver, libdrm support, and a new kernel DRM driver.

This new driver was announced a few days ago on the Lima IRC channel. The in-development Mali-400 OpenGL driver can be found on GitHub. It will be interesting to see where this new open-source Mali-400 driver leads and whether it advances far enough to eventually be mainlined.



Read the full article here by Phoronix

Dhalion: self-regulating stream processing in Heron

Dhalion: Self-regulating stream processing in Heron, Floratou et al., VLDB 2017

Dhalion follows on nicely from yesterday’s paper looking at the modular architecture of Heron, and aims to reduce the “complexity of configuring, managing, and deploying” streaming applications. In particular, streaming applications deployed as Heron topologies, although the authors are keen to point out the principles could be applied in other engines too.

Dhalion is a system that essentially allows stream processing frameworks to become self-regulating. Dhalion has been implemented and evaluated on top of Twitter Heron and we are in the process of releasing it to open-source as a contribution to the Heron code base. However, its architecture and basic abstractions are also applicable to other streaming engines as well.

And what does self-regulating mean exactly? Floratou et al. break it down into three components:

  • The system should be able to self-tune: given a topology and a desired SLO, it should be able to automatically tune configuration parameters to achieve the stated objective.
  • The system should then be able to self-stabilise to continue to maintain that SLO in the face of changing workload (e.g., a sudden surge of tweets). (This functionality is more commonly referred to as auto-scaling).
  • The system should be able to detect service degradations (gray failures?), diagnose the internal faults causing them, and take actions to recover from them. (Self-healing).

Let’s pause for a moment and think about what might be necessary to achieve those goals. Clearly there has to be some kind of monitoring to observe the system status (perhaps from multiple perspectives), with alerts/events generated when an abnormal situation is detected or SLOs are not being met. Then we’ll need a way to map from alert conditions back to causes, and finally given a determined cause or cause(s) we’ll need a way to take corrective action to address the situation. While we’re doing all this, we’ll need to be careful about feedback loops too in case our interventions themselves cause the system to become unstable or to oscillate between configurations degrading performance.

The first three of those requirements are satisfied by Dhalion’s symptom detectors, diagnosers, and resolvers. Dhalion has only rudimentary mechanisms to control feedback though: actions shown not to help can be blacklisted (“if it hurts, stop doing it!”), and after taking an action, the system waits for a reasonable period of time before considering any further interventions. In the evaluation, these two measures seem to be enough to exhibit desirable behaviours, but it would have been nice to have some reference to control theory, which is a whole branch of science dedicated to achieving stability around a setpoint.

Symptom detectors collect metrics from the underlying streaming system (for example, tuple processing rates, and number of packets pending). From these metrics Dhalion looks for symptoms that may be evidence of a problem – for example, is the system having to apply backpressure, or is there processing skew across the tasks of a particular pipeline stage? A symptom description contains a compact representation of the symptom together with the metric values used to identify it.

Detected symptoms are passed to the diagnosers, which seek to find explanations for them:

For example, the existence of backpressure can be attributed to various reasons such as resource underprovisioning at a particular stage, slow hosts/machines or data skew. Dhalion produces all the possible diagnoses that can explain the observed symptoms.

In the examples given in the paper, the diagnosers all look to be hand-coded rules, but you could in theory plug in machine learned classifiers or other such trickery.

Given a set of (possible) diagnoses, resolver selection explores possible actions to resolve the situation, which are then carried out by the resolvers. Typically there is a one-to-one mapping between diagnoses and resolvers. Major topology changes such as scaling up and down resources or restarting containers are typically invoked through the Heron Scheduler component.
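To make that flow concrete, here is a minimal sketch of how a detector, diagnoser, and resolver-selection loop could be wired together. This is not Dhalion's actual code; the class names and method signatures are invented purely for illustration.

# A minimal, hypothetical sketch of a Dhalion-style detector -> diagnoser ->
# resolver loop. Names and structures are invented, not Dhalion's real API.

class Symptom:
    def __init__(self, name, metrics):
        self.name = name          # compact symptom identifier, e.g. "backpressure"
        self.metrics = metrics    # metric values used to identify it

class Diagnosis:
    def __init__(self, cause, symptoms):
        self.cause = cause        # e.g. "underprovisioning", "slow_instance", "data_skew"
        self.symptoms = symptoms

def policy_cycle(detectors, diagnosers, resolvers, select_resolver):
    # 1. Collect symptoms from the metrics exposed by the streaming system.
    symptoms = [s for d in detectors for s in d.detect()]

    # 2. Produce every diagnosis that can explain the observed symptoms.
    diagnoses = [dg for diagnoser in diagnosers
                 for dg in diagnoser.diagnose(symptoms)]

    # 3. Pick an action (resolver) for the chosen diagnosis and execute it.
    if diagnoses:
        resolver, diagnosis = select_resolver(diagnoses, resolvers)
        resolver.resolve(diagnosis)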

… after every action is performed, Dhalion evaluates whether the action was able to resolve the problem or brought the system to a healthier state. If an action does not produce the expected outcome then it is blacklisted and it is not repeated again.

(Strictly, Dhalion tracks the ratio of unsuccessful interventions to invocations for a given action and diagnosis, and blacklists the action for that diagnosis when the ratio crosses a given threshold).
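A rough sketch of that bookkeeping, using hypothetical names rather than Dhalion's real implementation, might look like this:

# Hypothetical sketch of the blacklisting described above: track, per
# (diagnosis, action) pair, how often the action failed to improve the
# system's health, and blacklist it once the failure ratio crosses a threshold.
from collections import defaultdict

class ActionTracker:
    def __init__(self, threshold=0.5):
        self.invocations = defaultdict(int)
        self.failures = defaultdict(int)
        self.threshold = threshold

    def record(self, diagnosis, action, improved_health):
        key = (diagnosis, action)
        self.invocations[key] += 1
        if not improved_health:
            self.failures[key] += 1

    def is_blacklisted(self, diagnosis, action):
        key = (diagnosis, action)
        if self.invocations[key] == 0:
            return False
        return float(self.failures[key]) / self.invocations[key] > self.threshold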

Let’s see how this all comes together for a couple of use cases: dynamic resource provisioning, and SLO maintenance.

Dynamic resource provisioning (autoscaling)

The Dhalion Dynamic Resource Provisioning Policy tries to maximise throughput while also avoiding under-utilisation. There are three different symptom detectors, four diagnosers, and four possible resolutions.

Detectors

  • The pending packets detector monitors the number of pending packets in Heron instance queues for bolts. It looks to see whether all of the instances have a similar queue size or whether there are outliers.
  • The backpressure detector generates a symptom description whenever backpressure is being applied. This description includes the bolt that is the source of the backpressure and the amount of time input data consumption was suspended during the 300 second measurement period.
  • The processing rate skew detector looks for mismatches in the number of tuples processed by each Heron instance.

Diagnosers

  • The resource overprovisioning diagnoser considers the symptoms from the pending packets and backpressure detectors. If there is no backpressure and the average number of pending packets across instances of a bolt is almost zero, then it is possible the resources assigned to the bolt are over-provisioned.
  • The resource underprovisioning diagnoser considers whether observed backpressure could be the result of underprovisioned resources. It attributes the cause when all the instances of a bolt have similar processing rates and queue sizes.
  • The slow instance diagnoser attributes observed backpressure to a slow instance when the instances initiating backpressure have a much higher number of pending packets than their peers but similar processing rates. (The processing rates are similar because all instances operate at the speed of the slow instance).
  • The data skew diagnoser attributes observed backpressure to data skew when the instances initiating backpressure have a higher processing rate and a higher number of pending packets than their peers.

Resolution

  • If a resource overprovisioning diagnosis is made, then the bolt scale down resolver decreases the number of Heron instances associated with the bolt. The scale down factor is a configuration option. If the action results in backpressure the operation will be blacklisted and a subsequent scale up operation will bring the topology back to a healthy state.
  • The restart instance resolver moves slow instances to new containers (i.e., kills the slow one and restarts it?).
  • The data skew resolver adjusts the hash function used to distribute data to the bolts.
  • The scale up resolver scales up the resources dedicated to a bolt initiating an underprovisioning diagnosis. “To determine the scale up factor, the Resolver computes the percentage of the total amount of time that the Heron Instances spent suspending the input data over the amount of time where backpressure was not observed.”
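Taking the quoted description in the last bullet literally, the scale-up calculation might look roughly like the following sketch. This is my reading of that sentence, not actual Dhalion or Heron code, and all names are illustrative.

# Sketch of the scale-up factor as described above: the time instances spent
# suspending input (backpressure) divided by the time when backpressure was
# NOT observed, added on top of the current parallelism.
def scale_up_factor(suspended_seconds, window_seconds):
    normal_seconds = window_seconds - suspended_seconds
    overload_fraction = suspended_seconds / float(normal_seconds)
    # e.g. 100 s of suspension in a 300 s window -> 0.5, i.e. add ~50% more instances
    return 1.0 + overload_fraction

new_parallelism = round(4 * scale_up_factor(100, 300))   # 4 instances -> 6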

Here’s an example of dynamic resource provisioning at work while the workload is manipulated:

SLO maintenance

We observe that in a large number of streaming applications, users spend a significant amount of time tuning the topology to meet the requirement of a throughput above a certain threshold.

In their experiment, the authors simply submit a topology with a single Heron instance provisioned for each spout and bolt, and provide an SLO asking for at least 4 million tuples a minute at steady state. Dhalion is able to figure out a configuration that meets this objective as you can see in the figure below – although it does take about 70 minutes to do so!

The Dynamic Resource Provisioning Policy that we previously presented, assumes that the input data rate is given and attempts to allocate resources so that the system can handle the input data load as it varies over time. The Throughput SLO Policy goes a step further by attempting to adjust the input data rate by increasing the number of Heron Instances that belong to the spouts, in order to meet the performance SLO.

It uses an additional symptom detector, diagnoser, and resolver:

  • The emit count detector computes the total rate at which a spout emits data and forwards it to the throughput SLO violation diagnoser.
  • The diagnoser checks whether the topology is in a healthy state. If it is healthy, and the throughput doesn’t meet the required SLO, it emits a diagnosis that is forwarded to the spout scale up resolver.
  • You can probably guess what the spout scale up resolver does: it scales up the resources associated with the spout.

The last word

Note that although Dhalion has been implemented on top of Heron, its architecture and basic policy abstractions can be adopted by other streaming engines as long as they provide a metrics collection API and potentially a scaling API.



Read the full article here by the morning paper

Wednesday, 28 June 2017

This Cell Phone Can Make Calls Even Without a Battery


"Hello, hello. I am calling from a battery-free phone." Vamsi Talla's words in a cluttered lab at the University of Washington in Seattle are barely audible through pops and static. But the fact they can be heard at all, on a nearby Android smartphone, is revolutionary, because Talla's own cell phone has no battery at all. It draws what little power it needs from thin air.

The prototype cell phone is the culmination of a years-long quest by Talla, a research associate at the lab of Joshua Smith, who researches computer science and electrical engineering at UW. "If you had to pick one device to make battery-free, what would you pick?" asks Smith. "A cell phone is one of the most useful objects there is. Now imagine if your battery ran out and you could still send texts and make calls."

Realizing that vision required rethinking almost everything about how cell phones function today. In order to operate without a battery, the phone would have to rely only on energy that it could harvest from its surroundings.

Ambient light can be turned into a trickle of electricity with solar panels or photodiodes. Radio-frequency TV and Wi-Fi broadcasts can be converted into energy using an antenna. A hybrid system using both technologies might generate a few tens of microwatts. The problem is that a traditional cell phone uses tens of thousands of times more power, around 800 milliwatts, when making a call.
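To put those numbers side by side (taking "a few tens of microwatts" as roughly 30 µW, which is my own assumption):

# Quick arithmetic on the figures above: harvested power vs. a conventional call.
harvested_w = 30e-6      # ~"a few tens of microwatts" from light + RF harvesting
call_power_w = 800e-3    # ~800 milliwatts for a traditional cell phone call

print(call_power_w / harvested_w)   # ~26,667 -> "tens of thousands of times" more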

Power Up

The first thing the team tackled was communication. Smith's lab developed a technique called backscatter that allows a device to communicate by reflecting incoming radio waves, a bit like an injured hiker sending an SOS using the sun and a mirror. Smith has already spun out a start-up called Jeeva Wireless to commercialize what he calls "passive Wi-Fi"—digital backscatter technology for ultra-low power Wi-Fi gadgets. However, even passive Wi-Fi proved too power-hungry for the cell phone project.

"Converting analog human speech to digital signals consumes a lot of power," says Talla. "If you can communicate using analog technology, you're actually more power efficient." So although the cell phone uses digital signals to dial numbers, the backscatter process for voice calls is purely analog.

While developing analog backscatter, Smith realized that he was essentially re-inventing a spy technology used during the Cold War. In 1945, the Soviets presented the American Ambassador in Moscow with a carving of the Great Seal of the United States. Inside was hidden an audio bug that only activated when illuminated by the correct frequency of radio waves, using the energy of the waves themselves to operate.

"My dad was a spy in the Cold War, so I heard stories about the Great Seal bug when I was a kid," says Smith. "I wondered if analog backscatter could be software-controlled and turned from a curiosity for spooks into a technology that everyone could use."

Like the bug, some key components of Talla's phone are housed remotely to save power. A nearby basestation has circuitry for converting and connecting to the digital cellular network, currently via Skype. The prototype basestation uses an unlicensed frequency, limited to low-power transmissions. Because the phone relies on those signals for its energy harvesting, it has a range of just 15 meters from the basestation.

To develop the phone commercially, that circuitry could be built into a Wi-Fi router at home or, more likely, a traditional phone tower. "Real cell towers have a hundred times as much power, and would increase the range to perhaps a kilometer," says Talla.

Watts Happening

There is still a long way to go before that happens. The phone has a basic touch-sensitive number pad and its only display is a tiny red LED that glows briefly when a key is pressed. A large touchscreen would require around 400 milliwatts—over one hundred thousand times as much power as Talla's phone currently needs.

Most importantly, voice calls are still awkward. You have to press a button, walkie-talkie style, to switch between listening and talking, and sustaining a conversation through clouds of static is near impossible.

Talla promises better call quality and an E-Ink display for text messages on the next generation device, possibly along with a camera to snap selfies. Smith says that even as the prototype stands, built from off-the-shelf components, it is much cheaper than a normal phone. Built at scale, it would be cheaper still. And better yet, you would never have to worry about leaving your charger at home ever again.



Read the full article here by Wired Top Stories

Tuesday, 27 June 2017

Petya Ransomware Outbreak Sweeps Europe


A type of ransomware known as Petya and Petrwrap began spreading internationally on Tuesday. Reported victims so far include Ukrainian infrastructure like power companies, airports, public transit, and the central bank, as well as Danish shipping company Maersk, the Russian oil giant Rosneft, and institutions in India, Spain, France, the United Kingdom and beyond.

What makes the rapid escalation of Petya both surprising and alarming is its similarity to the recent worldwide WannaCry ransomware crisis, primarily in its use of NSA exploit EternalBlue to spread through networks.

"It is definitely using EternalBlue to spread," says Fabian Wosar, a security researcher at the defense firm Emsisoft that specializes in malware and ransomware. "I confirm, this is a WannaCry situation," Matthieu Suiche, the founder of security firm Comae Technologies, wrote on Twitter.

Microsoft had patched the EternalBlue vulnerability in March, prior to WannaCry's spread in May, which protected some systems from the infection. Based on the extent of damage Petya has caused so far, though, it appears that many companies put off patching despite the clear and potentially devastating threat of a similar ransomware spread. These systems apparently remain vulnerable even after Microsoft released multiple patches for legacy systems, like Windows XP, that the company no longer supports. And publicity about the attack led many system administrators to prioritize upgrading their systems for defense.

But Petya's spread using EternalBlue shows how dire the patching landscape really is. McAfee fellow and chief scientist Raj Samani notes that Petya may also use other propagation methods as well, for maximum impact.

No Kill Switch

The Petya ransomware itself has circulated since 2016; its spread has just now hastened thanks to EternalBlue. It has two components: The main malware infects a computer's master boot record, and then attempts to encrypt its master file table. If it can't detect the MFT, though, it turns operations over to its other component, a ransomware that Petya incorporates called Mischa, and simply encrypts all the files on the computer's hard drive the way most ransomware does.

In either case, once infected a computer displays a black screen with red text that reads, "If you see this text, then your files are no longer accessible, because they have been encrypted. Perhaps you are busy looking for a way to recover your files, but don't waste your time. Nobody can recover your files without our decryption service." Then the ransomware asks for $300-worth of bitcoin–the same amount WannaCry demanded.

It's not yet clear where the wave of attacks originated or who is behind it. "Everyone talked about Ukraine first, but I don't know [which country is hardest hit]. It's worldwide," says MalwareHunterTeam, a researcher with the MalwareHunterTeam analysis group.

Most troubling, perhaps, is that Petya doesn't appear to suffer the same errors that stunted WannaCry's spread. The amateurish mistakes that marked that outbreak limited both the scope and the eventual payouts collected; it even included a "kill switch" that shut it off entirely after just a couple of days.

The only potential good news? Enough people may have patched since WannaCry to forestall a breakout on the same scale.

"I think the outbreak is smaller than WannaCry, but the volume is still quite considerable," says Raj Samani, McAfee fellow and chief scientist. "This is particularly nasty, it’s not as widespread, but it’s certainly quite significant."

So far, this round of Petya attacks has netted 1.5 bitcoin, or around $3500. That may not seem like much so far, but the number has steadily increased since the first reports broke this morning. It also has no kill switch function–which means there's no way yet to stop it.



Read the full article here by Wired Top Stories

Saturday, 24 June 2017

Microsoft Claims 'No Known Ransomware' Runs on Windows 10 S. Researcher Says 'Hold My Beer'

Earlier this month, Microsoft said "no Windows 10 customers were known to be compromised by the recent WannaCry (WannaCrypt) global cyberattack," adding that "no known ransomware works against Windows 10 S." News outlet ZDNet asked a security researcher to see how good Microsoft's claims were. Turns out, not much. From the report: We asked Matthew Hickey, a security researcher and co-founder of cybersecurity firm Hacker House, a simple enough question: Will ransomware install on this operating system? It took him a little over three hours to bust the operating system's various layers of security, but he got there. "I'm honestly surprised it was this easy," he said in a call after his attack. "When I looked at the branding and the marketing for the new operating system, I thought they had further enhanced it. I would've wanted more restrictions on trying to run privileged processes instead of it being such a short process."



Read the full article here by Slashdot

Thursday, 22 June 2017

Citrix isn’t just for telecommuting, Red Bull Racing uses it at the track

Daniel Ricciardo of Red Bull Racing prepares for the 2017 Australian Grand Prix.

"Big Data" has been all the rage for the last few years. But the sport of Formula 1 racing caught that bug a long time ago, certainly in the days predating that buzzword. In the past, we've taken a look at how teams like Williams Martini Racing, Renault Sport Formula One, and Caterham F1 (RIP) have handled collecting and crunching their terabytes. Today, it's Red Bull Racing's turn.

"I've worked for the team for 13 years now, and we've been doing this for ages. The complexity of what we measure and sophistication of the analytics continues to improve, but we've been doing big data for a long time," explained Matt Cadieux, Red Bull Racing's chief information officer. The data in question is collected by myriad sensors all over the team's race cars, roughly adding up to a terabyte each race weekend (500GB for each of the two cars).

"But if you look at all the other data we use—video, audio, number crunching to run through various simulations—it's a huge multiplication factor on top of that," he told Ars. Cadieux wouldn't give us an exact number for that data volume over a race weekend, lest that information prove too useful to the team's rivals in the paddock, but company-wide the team manages 8PB of data. Cadieux reckoned that 95 percent of that was related to car design and car performance—think CAD (computer-aided design) and CFD (computational fluid dynamics), but also strategy simulations and historical telemetry data from previous seasons. "We have a very data-hungry business," he said.

That data is fed back to the garage and used to inform the team's strategy during races. But it also gets fed back to the engineering department in the English town of Milton Keynes, where it's used to refine the cars throughout the season and inform the development of next year's machines.

In recent years, the FIA—the group that writes F1's rules—has been on a cost-saving kick. (It's well-meant, but the FIA ignores the fact that each team will spend every penny of its yearly budget in search of better performance at the track.) That has meant restrictions on the use of wind tunnels (which are limited to 60-percent scale) but also CFD modeling—with a maximum of 30 hours of wind tunnel time or 30 teraflops of CFD processing in a week (or a combination of both so long as the combined mix of hours and teraflops only adds up to 30). But teams are also now limited in the number of personnel they can bring to each race weekend.

Keeping HQ in the loop

The Red Bull Racing simulator building at the team factory in Milton Keynes, England. (Image: Getty Images for Red Bull)

Like other teams we've spoken to, Red Bull supplements its personnel with data links and enterprise software to loop in engineers back at HQ. "Sixty people have operational roles at the race track, and we supplement that with a large number here in the UK. We have an operations room—branded by AT&T—that holds more than 40 people at full capacity," Cadieux told us.

For each race weekend and test session, the team sets up an MPLS (Multi-Protocol Label Switching) connection between the track and Milton Keynes that provides the operations room real-time feeds for video, audio, and voice chat. So if the team is testing a new part from the design office—a revised front wing, for example—that part can be debugged with the expert from the design team in the ops room. "He can see how the car is operating in real-time and participate in engineering debriefs that take place after every practice session or qualifying session during a race weekend," he said.

That connection back to home base also means that if the team is nursing a particular issue on a car during the race, it can have that system's expert in the loop. "They can pretty much act as if they were at the race track themselves because of the real-time connectivity," Cadieux said. Other members of the team play a more proactive role by monitoring competitors to identify weaknesses or strengths that could inform Red Bull's strategy. The platform the team uses for this may be familiar to those of you who spend your work days in remote meetings: Citrix.

"What Citrix allows us to do is look at applications that are very graphically intensive or that have big data there that you couldn't normally interact with at a remote location," Cadieux explained. That includes telemetry from the cars as well as post-processing and analysis of that car data, a service provided by servers that the team brings to each race but which is then also piped back to home base.

And data flows in the other direction, too. Since real-world testing is now highly restricted by the rulebook, the Friday practice sessions of an F1 race weekend are used by Red Bull to validate all the simulations it runs during the week. "Guys can upload CAD big assemblies, rotate them, cut big sections through, and suggest changes for the future. With CFD, they can open up the animations and stream videos with more or less the same performance as if they were back at HQ," he told us.

Got to keep it safe

The Red Bull Racing simulator, taken in 2010. (Image: Getty Images for Red Bull)

Obviously, all of this data needs to be well-protected. Cadieux pointed out that F1 is an incestuous industry, with most of the grid operating from bases within 50 miles of each other in the UK's so-called "carbon-fiber triangle." Teams have relatively flat organizational structures, and Cadieux pointed out that an easy way to get promoted is to go to another team, data in hand. (Most famously, a Ferrari engineer was caught passing documents to McLaren in 2007, but it has happened before—in 2004, Toyota was accused of making use of stolen Ferrari design plans.)

But Cadieux said that the need for data-access controls has to be balanced by a necessity not to slow the pace at which the team works or to make the work environment too unpleasant. External threats exist too, from other teams but also from third parties looking for a ransomware payout. "Because we're high-profile, we tend to be a target," he said.

But perhaps the most fascinating aspect of Red Bull's informatics work is one that Cadieux couldn't tell us much about: machine learning and AI. "We're exploring them to make better simulations and better decisions at the races," he said. "It's early days; we're working with several partners right now, but it's the very beginning of a journey. But I've been in IT for almost 30 years, and there's more pace of change now than ever before. It's both scary and a huge opportunity."



Read the full article here by Ars Technica

Guy Invents a Self-Driving Potato That Acts Like a Cat

Marek Baczynski shows off and demonstrates his latest creation, a self-driving potato named Pontus that has a cat-like personality...



Read the full article here by Likecool

AMD EPYC Event in Case You Missed It

AMD has put up a video of the entire EPYC Launch Event that we attended yesterday in Austin, TX. We have an editorial published here if you care to give that a read, along with pictures of all 55 relevant slides if you are in a non-video friendly environment. Welcome to the start of a new era for the datacenter with the launch of AMD EPYC. Footage taken from the launch event in Austin, Texas on June 20th, 2017. Ongoing Discussion

Read the full article here by [H]ardOCP News/Article Feed

Wednesday, 21 June 2017

Capturing Game Levels with Photogrammetry in 3D

Thomas Kole talked about his amazing new tech, which allows capturing low-poly copies of complex 3D elements with techniques similar to photogrammetry.

I’m Thomas Kole, and together with Daan Niphuis at Force Field VR we developed a method to convert complex environments into simple 3D meshes with only one texture, in Unreal Engine 4.

Daan Niphuis is an Engine Programmer, and I’m a Technical Artist intern, though this is my last week at the company. Force Field VR is a studio in Amsterdam that came from Vanguard, known for the Halo Spartan Assault and Spartan Strike games.

Under Force Field, they produced Landfall and Terminal, for the Oculus Rift and Gear VR respectively. The system we developed looks a lot like Photogrammetry or Photoscanning, but cuts some tricky steps. The scenes you see here are from Unreal Tournament.

Generating Low Poly Models of Large Environments

When you’re making an open-world game, or a game with large-scale levels in general, showing large structures in the distance can often be a challenge. Usually the process involves making a manual low-poly version of the distant mesh and somehow baking the textures onto it. This system could automate that process.

Automating Process

The process is not 100% automated, but it requires very few manual steps. Essentially, an artist sets up a number of camera positions from which he’d like to capture the environment, and executes a function that starts the capturing.

He then takes the data into Meshlab, processes it, and puts it back into Unreal. Depending on the size and complexity of the scene, the process should not take more than 1 hour.

Photogrammetry

Photogrammetry works by comparing many photos, and looking for similarities. With these similarities, it can reconstruct a sparse 3D point cloud. Now it can look for even more similarities and reconstruct a dense 3D point cloud. We can skip this step, because we can extract this information per photo directly from UE4. We capture the environment from a bunch of locations, from all directions. 4 times in total per direction. This way we capture base colors, normals and world positions, which we compose into one big point cloud. We also capture a high resolution screenshot from that point, which we use to project the textures from at the end of the process.
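As a rough illustration of the "compose into one big point cloud" step, here is a sketch of merging captured buffers into an ASCII PLY file that a tool like Meshlab can read. This is my own sketch, not Force Field's pipeline, and it assumes the captures have already been exported from UE4 as flat arrays of positions, normals, and colors:

# Hypothetical sketch: merge per-capture world positions, normals and base colors
# into a single ASCII PLY point cloud for processing in Meshlab.
def write_point_cloud(captures, path):
    # 'captures' is assumed to be a list of dicts with parallel lists of
    # (x, y, z) positions, (nx, ny, nz) normals and (r, g, b) colors per pixel.
    points = []
    for cap in captures:
        for pos, nrm, col in zip(cap["positions"], cap["normals"], cap["colors"]):
            points.append((pos, nrm, col))

    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property float nx\nproperty float ny\nproperty float nz\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for (x, y, z), (nx, ny, nz), (r, g, b) in points:
            f.write("%f %f %f %f %f %f %d %d %d\n" % (x, y, z, nx, ny, nz, r, g, b))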

With this Point Cloud we generate a new mesh within Meshlab. This mesh has the same shape and contour as the environment, but it’s very high-poly. This mesh is then reduced, unwrapped, and textured for the final model.

UV Work

UV unwrapping, sometimes called Mesh Parameterization, is always tricky, and took a large chunk of the research time. Initially, I wanted to do that process entirely in Meshlab, but it did not produce good enough results. UVs should have large chunks, little stretching, and no overlap. Three criteria which are always conflicting. I found that Maya’s automatic unwrapping, together with its packing, works pretty well. There’s a plugin for Blender called Auto Seams Unwrap, which produces even better patches, but it can take a long time to compute (sometimes over half an hour for a very complicated mesh). This step could be automated further with a script.

Capturing Information

In this case, we capture the final color of the scene – with lighting and all. This means that the final model can be used with an unlit shader, which is very cheap. But that does mean that all dynamic lighting is lost.

However, the system could be modified to capture Base Colors, Normals and Roughness (optional) Instead, for dynamically lit scenes.

Small lights in the environment could even be baked to an emissive texture for additional detail.

Optimization 

Of course there’s a huge loss in geometry detail when you run an environment through this pipeline. However, the final polycount is in your hands.

Everything is baked into one final texture, so small details in texture are also lost. High frequency details such as wires, chain link fences, and thin meshes can be problematic too.

For the captures of Unreal Tournament, I tried to take those out.

The one thing that the process does preserve very well is contour and shape, which is perfect for distant geometry.

Cases

There’s all sorts of uses for this technology. The most useful one would be using it for distant geometry.

But you could also use it for marketing purposes, uploading 3D action scenes of a game to a site like Sketchfab.

If you want to read some more, you can read my article on my portfolio.

Thomas Kole, Technical Artist

Interview conducted by Kirill Tokarev

Follow 80.lv on Facebook, Twitter and Instagram





Read the full article here by 80lvl

Tuesday, 20 June 2017

MRIs: Why Are They So Loud?

My dad was scheduled for his first MRI scan the other day, and since I’m the designated family technical expert, Pop had plenty of questions for me about what to expect. I told him everything I knew about the process, having had a few myself, but after the exam he asked the first question that everyone seems to ask: “Why is that thing so damn loud?”

Sadly, I didn’t have an answer for him. I’ve asked the same question myself after my MRIs, hoping for a tech with a little more time and a lot more interest in the technology he or she uses to answer me with more than the “it’s the machine that makes the noise” brush-off. Well, duh.

MRI is one of those technologies that I don’t feel I have a firm enough grasp on, and it seems like something I should really be better versed in. So I decided to delve into the innards of these modern medical marvels to see if I can answer this basic question, plus see if I can address a few more complicated questions.

Spin Doctors

Magnetic Resonance Imaging is based on the technique of nuclear magnetic resonance spectroscopy. NMR uses powerful magnets to align a chemical sample’s atomic nuclei and then tickle them with RF waves, revealing structural and chemical properties of the sample under test. NMR spectroscopy has been used for decades to explore the structure of matter; almost every academic or industrial chemistry lab has access to NMR nowadays.

An MRI scanner uses the principles of NMR to map the water molecules in the body by probing for the single proton in the nucleus of hydrogen atoms. A large superconducting magnet produces a strong and stable magnetic field down the long axis of the core of the scanner. When a patient is put into the machine — fair warning to claustrophobics that this is not going to be a happy time for you — the magnetic field gets to work on the protons in the water (and fat) in the patient’s tissues.

Each proton has a quantum property called spin, which is a little like the Earth spinning on its axis. Outside of a magnetic field, each proton’s spin axis is randomly oriented, but inside the field, everything snaps into alignment. A little more than half the protons are oriented toward the patient’s head, which is the low energy state, and the rest are aligned toward the feet, which is a slightly higher state and therefore less favored. The result is a slight net spin moment oriented toward the head, meaning that your body is turned into a bar magnet during the exam.

Once the protons are all lined up, a powerful pulse of RF energy is transmitted into the tissue being studied. The exact parameters depend on the study being conducted, but typically the frequency is in the 10 to 100 MHz range at a power of 10 to 30 kW. It’s akin to putting your precious self a few inches from the antenna of a shortwave radio station, which is almost never a good idea. But the RF is rapidly pulsed during the exam, which reduces the duty cycle and decreases exposure risk. But there are cases where significant heating can occur in a patient’s tissues as a result of the radio pulses, to the point where specific positions are forbidden to prevent RF loops that could lead to internal heating, and there are guidelines for reporting “heating events.” I’ve felt this myself; during my last MRI my wedding ring, which was overlooked in the pre-exam search for metal, heated up to the point where I almost asked the tech to stop the exam.

These powerful RF waves stimulate the protons that aligned in the high energy state to flip to their low energy state, releasing RF energy in the process. The amount of signal received is proportional to the number of protons, which in turn represents the amount of water in the different tissues. Of course, this is a drastic simplification of the real physics here. I’ve left out all kinds of detail, like the Larmor frequency, spin precession, relaxation, and a bunch of other stuff. But those are the basics of getting a map of the water in your body.
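For a sense of scale on those RF figures: the Larmor relation mentioned above is f = (γ/2π)·B₀, with γ/2π ≈ 42.58 MHz/T for hydrogen. A quick back-of-the-envelope check (my own sketch, not from the original article):

# Sanity check of the RF range quoted earlier, using the Larmor relation for 1H.
GAMMA_MHZ_PER_T = 42.58                 # gamma / (2*pi) for hydrogen, in MHz per tesla

for b0_tesla in (1.0, 1.5):             # typical clinical field strengths
    print("B0 = %.1f T -> ~%.1f MHz" % (b0_tesla, GAMMA_MHZ_PER_T * b0_tesla))
# ~42.6 MHz and ~63.9 MHz, squarely within the 10 to 100 MHz range mentioned above.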

Noisy Coils

But still: why the noise? And more importantly to me: how do we get spatial data from a single antenna? Other imaging techniques using X-rays, like CT scans, are easy to understand — a gantry moves an X-ray tube and a digital detector around your body and turns the stream of density data into a 2D-image based on the position of the beam relative to your body. But nothing moves in an MRI scanner other than the patient bed, and that stays still during the scan. How does an MRI scanner scan?

It turns out that the answers to both those questions are related to another set of magnets inside the scanner: the gradient magnets, or gradient coils. The gradient coils are essentially powerful electromagnets that are designed to slightly distort that carefully aligned, stable, powerful field running down the bore of the scanner. There are three coils located inside the main magnet, arranged to perturb the main field in three dimensions. The result is a magnetic field of varying strength whose location can be very accurately controlled in three dimensions. The scanner’s software correlates the returned RF signal to the location defined by the three gradient fields, generating the astoundingly detailed images we’ve all seen.

But what about the noise? Those gradient coils need to be pulsed very rapidly to scan the point of interest across whatever structures need to be imaged. Thanks to Lorentz forces, each one of those pulses causes the coils to deflect mechanically a bit, causing a vibration in the air. The pulses are generally in the range of a few kilohertz, well within the audio frequency range. And they can be loud, like 110 dB or more. Thinking back on my scans, I can recall an underlying periodicity to the sounds — rhythmic changes that probably correlated to how the gradient was rastering across my body. The things you notice when you turn your mind inward to avoid the panic of claustrophobia.

I’ve only scratched the surface of how MRI works here, but at least I feel like I know a little more about this technology now. It won’t make me any happier to be shoved into that noisy tube again, but at least I’ll be able to contemplate what’s going on around me to pass the time.

And by the way, my dad did fine, and thankfully they didn’t find anything wrong.



Read the full article here by Hack a Day

Monday, 19 June 2017

DIY Shortcut Keyboard



Working with CAD programs involves focusing on the task at hand, and keyboard shortcuts can be very handy. Most software packages allow the user to customize these shortcuts, but eventually certain complex key combinations can become a distraction.

[awende] over at Sparkfun has created a Cherry MX keyboard which maps all of the Autodesk Eagle shortcuts onto a single 4×4 matrix. The project exploits the Arduino Pro Mini’s ability to mimic an HID device over USB, thereby enabling the DIY keyboard. Pushbuttons connected to the GPIOs are read by the Arduino, and the corresponding shortcut key presses are sent to the host machine.

Additional functionality is implemented using two rotary encoders and the Teensy encoder library. The first knob functions as a volume control, with its push-button working as a mute button. The second encoder controls the grid spacing, and its embedded button switches between imperial and metric units. The entire code, as well as the schematic, is available on GitHub for your hacking pleasure. It’s a polished project just ready for you to adapt.

The project can be extended to be used with other computer software such as Gimp and the keys may be replaced by capacitive touch sensors making it more sturdy. Bluetooth can be added to make things wireless and you can check out the Double Action Keyboard to extend functionality further.

VIDEO



Read the full article here by Hack a Day

2018 F1 calendar announced by FIA – Germany and France return


Formula 1 will have three consecutive races for the first time next year as the FIA has announced its 2018 calendar will include the French, Austrian and British Grands Prix on successive weekends.

With the return of the French GP at Circuit Paul Ricard and the German round at Hockenheim, the 21-race calendar was approved at the FIA’s World Motor Sport Council meeting in Geneva, Switzerland.

The Azerbaijan GP moves from June to April, before Spain, while the Russian GP has switched from April to September, before the Japanese round; the Malaysian GP has dropped off the calendar after 19 years.

The 2018 Le Mans 24 Hours was also approved for June 16/17 and will not clash with F1. The rest of the World Endurance Championship schedule is yet to be announced.

The FIA says that the races in China and Singapore are subject to commercial rights holder confirmation and its next World Motor Sport Council meeting, to be attended by 110 National Sporting Authorities, will take place in Paris on September 21 2017.

Have your say in the comment section below or on our Facebook Page.

2018 F1 Calendar:

25 March – Melbourne, Australia
08 April – Shanghai, China
15 April – Sakhir, Bahrain
29 April – Baku, Azerbaijan
13 May – Barcelona, Spain
27 May – Monaco, Monaco
10 June – Montreal, Canada
24 June – Paul Ricard, France
01 July – Red Bull Ring, Austria
08 July – Silverstone, Great Britain
22 July – Hockenheim, Germany
29 July – Budapest, Hungary
26 August – Spa-Francorchamps, Belgium
02 September – Monza, Italy
16 September – Singapore, Singapore
30 September – Sochi, Russia
07 October – Suzuka, Japan
21 October – Austin, USA
28 October – Mexico City, Mexico
11 November – Sao Paulo, Brazil
25 November – Yas Marina, Abu Dhabi



Read the full article here by James Allen on F1 - The official website

Saturday, 17 June 2017

How Fan Films Helped Shape The LEGO Movie

Vox gives us a look at how fan films helped shape The LEGO Movie. "When you watch installments of the Warner Bros. line of Lego movies, it’s hard not to be struck by how realistic the animation is. It isn’t quite traditional stop motion — but it sure looks as if it could be. That’s largely thanks to the work of the animators at Animal Logic, a Sydney-based visual effects studio that has worked on The Lego Movie, The Lego Batman Movie, and the upcoming The Lego Ninjago Movie. Powered by live action filming techniques and a close attention to detail, the studio has helped reinvent what Lego animations can look like. But they owe a lot of that aesthetic to the influence of fan films. Since the early 1970s, enthusiasts have made home movies with their own Lego sets. They’re called Brickfilms — and they’ve grown into a sizable community producing great movies and helping many young animators get their start. The Lego Movie animators learned from what made those home movies so good by embracing the limitations of the medium, and creating a world that anyone could rebuild at home."



Read the full article here by Likecool

Friday, 16 June 2017

Canada rules that all new cellphones must be unlocked

Canadians pay some of the highest wireless rates of any G7 nation, and to add insult to injury, they often have to shell out $50 or more to unlock cellphones when switching operators. However, the nation's wireless regulator, the CRTC, has now ordered carriers to unlock devices for free and decreed that all new smartphones must be sold unlocked. The move was prompted by excoriating public criticism on unlocking fees after the CRTC requested comment on new wireless rules.

"That's called a 'Ransom Fee' or 'Hostage Fee' in any other business," one person wrote, as noted by the CBC. "It is unbelievable how the government allows these companies to extort money like this!" Telecoms notably made $37.7 million CAD ($28.5 million) in unlocking fees, a jump of 75 percent since 2014.

Carriers disagreed with the CRTC. "We think it's a lot more appropriate that people who actually have their device unlocked bear the cost of the unlocking," said Rogers VP Howard Slawner. He failed to mention, however, that carriers are the ones that lock devices in the first place, and anyway, the CRTC is banning that, too.

With a new federal government in place, Canada's CRTC seems to be favoring more consumer-friendly internet laws. It recently ruled against a free music "zero-rating" scheme from carrier Videotron, which many critics believe violates the principle of net neutrality. Rather than creating hard rules, however, the regulator is laying out a "framework" and then ruling on potential violations on a case-by-case basis. By contrast, the US FCC under Ajit Pai wants to roll back consumer-friendly net neutrality rules put in place by Tom Wheeler under the Obama administration.

To give you an idea how much Canadians overpay on wireless, it's actually cheaper for me, an ex-pat Canadian who lives in Paris, to bring my French SIM card over when I visit. France-based carrier Free Mobile charges just €20 per month (about $30 CAD) with no contract, and includes a massive 25GB of data roaming per month in Canada and the US, with free calls and texts within those countries or to (and from) France. If I was to purchase and use a local SIM card in my unlocked phone, it would cost much more and give me much less.

Via: CBC

Source: CRTC



Read the full article here by Engadget

Thursday, 15 June 2017

Glibc Now Enables Tuning Framework By Default

Since the end of last year, Glibc has offered a tunables framework that could be enabled via the --enable-tunables switch at compile time; now it is being enabled by default.

Glibc's documentation describes their tunables support as:

Tunables is a feature in the GNU C Library that allows application authors and distribution maintainers to alter the runtime library behaviour to match their workload.

The tunable framework allows modules within glibc to register variables that may be tweaked through an environment variable.
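In practice that environment variable is GLIBC_TUNABLES, which takes colon-separated name=value pairs. Here is a small, hypothetical example of launching a child process with one of the documented malloc tunables set; ./my_program is just a placeholder binary:

# Hypothetical example: set a glibc tunable for a child process via the
# GLIBC_TUNABLES environment variable (colon-separated name=value pairs).
import os
import subprocess

env = dict(os.environ)
env["GLIBC_TUNABLES"] = "glibc.malloc.check=2"   # glibc.malloc.check is a documented tunable

subprocess.run(["./my_program"], env=env)        # placeholder program, for illustration only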

As of the latest Git code, tunables is enabled by default -- no longer requiring the --enable-tunables switch at build time, but always on unless explicitly disabled.

Siddhesh Poyarekar's argument for enabling the tunables by default: "All of the major architectures are adopting tunables as a way to add tuning to the library, from hwcap_mask for aarch64 to HLE for s390 and ifunc and cache geometry for x86. Given this adoption and the fact that we don't want additional tuning knobs to be added outside of tunables, it makes sense to use this trivial patch."

The change will be found in glibc 2.26.



Read the full article here by Phoronix

Wednesday, 14 June 2017

Godot Engine Still Working On 3.0 Release, WebGL 2.0 & WebAssembly Work Too

The open-source Godot 2D/3D Game Engine continues working towards its big "Godot 3.0" release although it remains months behind schedule.

In a status update covering the past two months of work, there has been activity on a new customizable editor theme, WebGL 2.0 and WebAssembly support, support for Mono/C#, initial work on AR and VR support, scripting improvements, and more.

They originally planned to release Godot 3.0 in the fall of 2016, but so far the 3.0 release isn't ready. Within a few weeks they hope to ship a 3.0 alpha, and around August they might be in a position to ship Godot 3.0.0.

More details via today's GodotEngine.org status report.



Read the full article here by Phoronix

Greatly enhanced Google Backup and Sync to (mostly) replace Drive

Later this month -- June 28 specifically -- Google is to launch a new tool: Backup and Sync. Aimed primarily at consumers, the tool is designed to backup files and photos, and make them accessible from any device. Backup and Sync is described as "the latest version of Google Drive for Mac/PC, which is now integrated with the Google Photos desktop uploader," and for most users it will completely replace Drive. Corporate and enterprise users are advised to hold out until the release of Drive File Stream later in the year, but for everyone else, Backup and Sync opens up…


Read the full article here by Betanews

DIY Raspberry Neural Network Sees All, Recognizes Some

As a fun project I thought I’d put Google’s Inception-v3 neural network on a Raspberry Pi to see how well it does at recognizing objects first hand. It turned out to be not only fun to implement, but also the way I’d implemented it ended up making for loads of fun for everyone I showed it to, mostly folks at hackerspaces and such gatherings. And yes, some of it bordering on pornographic — cheeky hackers.

An added bonus many pointed out is that, once installed, no internet access is required. This is state-of-the-art, standalone object recognition with no big brother knowing what you’ve been up to, unlike with that nosey Alexa.

But will it lead to widespread useful AI? If a neural network can recognize every object around it, will that lead to human-like skills? Read on.

How To Do Object Recognition

Inception object recognizer internals

The implementation consists of:

  • Raspberry Pi 3 Model B
  • amplifier and speaker
  • PiCamera
  • momentary switch
  • cellphone charger battery for the Pi

The heart of the necessary software is Google’s Inception neural network, which is implemented using their TensorFlow framework. You can download it by following the TensorFlow tutorial for image recognition. The tutorial doesn’t involve any programming so don’t worry if you don’t know Python or TensorFlow. That is, unless you’re going to modify their sample code as I did.

 

classify_image.py printing that it saw a panda

The sample code takes a fixed named file containing a picture of a panda and does object recognition on it. It gives the result by printing out that it saw a panda. But that wasn’t enough fun.

I hunted around for some text-to-speech software and found Festival. I then modified the sample code so that when it wants to say it saw a panda, it runs Festival in a Linux shell and actually says “I saw a panda” through the speaker.

But that still wasn’t fun enough. I connected a PiCamera to the Raspberry Pi, and had that take a photo and give it to the TensorFlow code to do object recognition. In the vernacular, it now ran inference on my photo.

And lastly, to make it all real easy I connected a momentary switch to one of the Pi’s GPIO pins and took the photo when the momentary switch was pressed.

Here’s the Python program’s main() function before…

def main(_):
  maybe_download_and_extract()
  image = (FLAGS.image_file if FLAGS.image_file else
           os.path.join(FLAGS.model_dir, 'cropped_panda.jpg'))
  run_inference_on_image(image)

… and after.

# Additional imports needed for this modified version, beyond those already
# present in Google's classify_image.py sample:
import os
import time

import RPi.GPIO as GPIO
from picamera import PiCamera

def main(_):
  os.system("echo %s | festival --tts" % "Wait while I prepare my brain...")

  maybe_download_and_extract()
  # Creates graph from saved GraphDef.
  create_graph()

  # preparing for the switch
  GPIO.setmode(GPIO.BCM)
  GPIO.setup(17, GPIO.IN)

  camera = PiCamera()

  os.system("echo %s | festival --tts" % "I am ready to see things.")

  while True:
    # loop for the switch
    while (GPIO.input(17) == GPIO.LOW):
      time.sleep(0.01)

    # take and write a snapshot to a file
    image = os.path.join(FLAGS.model_dir, 'seeing_eye_image.jpg')
    camera.capture(image)

    os.system("echo %s | festival --tts" % "I am thinking about what you showed me...")
    human_string = run_inference_on_image(image)
    os.system("echo I saw a %s | festival --tts" % human_string)

The calls to os.system() are where I run the Festival text-to-speech program to make it say something to the speaker.

maybe_download_and_extract() is where Google’s Inception neural network would be downloaded from the Internet, if it’s not already present. By default, it downloads it to /tmp/imagenet which is on a RAM disk. The first time it did this, I copied it from /tmp/imagenet to /home/inception on the SD card and now run the program using a command line that includes where to find the Inception network.

Running the Inception object recognizer

The call to create_graph() was moved from inside the run_inference_on_image() function. create_graph() sets up the neural network, which you need to do only once. Previously the program was a one-shot deal, but now it has an infinite while loop which calls run_inference_on_image() each time through the loop. Obviously, setting up the neural network is something you do only once (see our introduction to TensorFlow for more about graphs) so it had to be moved above the loop.

The run_inference_on_image() function is where the image is given to the neural network to do the object recognition. It used to just print out whatever it thought was in the image, but I modified it to instead return the text string containing what it thinks the object is, “coffee mug” for example. So the last line is where it would say “I saw a coffee mug” to the amplifier and speaker.

Boxing all that up gave me a small, standalone package that could be carried around and tried out by anyone. Here’s a video of it in action.

An improvement would be to add a small screen so that the user could see what the camera sees, but the PiCamera has a wide viewing angle and a screen turns out not to be necessary.

How Good Is Its Object Recognition

Inception seeing a tobacconist

Showing it a cell phone often results in it saying it saw a cell phone, but sometimes an iPod. However, so far it has gotten water bottles and coffee mugs correct every time.

However, it doesn’t do well with people. Pointing it at me in my office causes it to say it saw a “tobacco shop, tobacconist shop, tobacconist”, probably due to the shelves of equipment and parts directly behind me. Standing against a blank wall, it said it saw a sweatshirt; removing that, it saw a tee shirt; removing that, it said “bathing trunks, swim trunks”, despite seeing only my bare upper torso and head. (I’ll spare you the photo.)

The neural network is trained on the ImageNet dataset, specifically the version from the Large Scale Visual Recognition Challenge of 2012. That dataset consists of a huge collection of images divided up into 1000 classes, each class containing images of a particular object. As you can see from this small sample from the cell phone class, some of the phone images are a little dated. However, objects such as coffee mugs don't change over time.

But that didn’t stop everyone who played with it from having fun, walking around testing it on everything in sight, like finding a magic wand for the first time and waving it around to see what it could conjure.

Is That The Best You Can Do?

Well, first off, each recognition takes around 10 seconds on a Raspberry Pi 3, so either that has to be sped up or a faster processor used, preferably one with a CUDA-enabled Nvidia GPU, since that's the only type of GPU TensorFlow currently supports.

The Inception neural net is only as good as the data it’s trained on. The flaws I pointed out above regarding recognizing cell phones and people are issues with the ImageNet dataset. Only 3.46% of the time are all 5 of its best guesses wrong, whereas humans doing the same test are wrong in their 5 best guesses 5% of the time. Not bad.

As we pointed out in our article about the freaky stuff neural networks do today, Long Short Term Memory (LSTM) neural networks can examine what they see in a single frame of a video while taking into account what came before in the video. For example, such a network has more confidence that it saw a beach ball rather than a basketball if the preceding scene was a beach party. That differs from the Inception neural network, which has only the image you show it to go on.
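
To make that concrete, here's a minimal sketch (mine, written with the Keras API rather than anything from the article) of feeding per-frame feature vectors through an LSTM so that earlier frames can influence the current label; the layer sizes and the 1000-class output are illustrative assumptions:

import tensorflow as tf

NUM_FRAMES = 16      # frames of video context per prediction
FEATURE_DIM = 2048   # e.g. the size of an Inception feature vector per frame
NUM_CLASSES = 1000   # ImageNet-style label space

model = tf.keras.Sequential([
  # The LSTM carries state across the frame sequence, so an earlier beach
  # scene can nudge the current frame's label toward "beach ball".
  tf.keras.layers.LSTM(256, input_shape=(NUM_FRAMES, FEATURE_DIM)),
  tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')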

Where Does This Get Us?

Will improved object recognition lead to widespread useful AI with human-like skills? The evolution of the eye is often cited as a major cause of the burst of new lifeforms known as the Cambrian explosion around 541 million years ago, though there is much debate about whether it really was the cause.

When those eyes evolved, however, there was already some form of brain to use them. That brain already handled the senses of touch, vibration and smell. So improved object recognition alone wouldn’t cause a revolution. For human-like skills our AIs would need more intelligence. We currently have only bits and pieces of ideas of what we need for that.

What many agree on is that our AI would need to make predictions so that it could plan. For that it could have an internal model, or understanding, of the world to use as a basis for those predictions. For the human skill of applying a soldering tip to a wire, an internal model would predict what would happen when the tip made contact, and then plan based on that. If things don't go as predicted when the tip actually contacts the wire, the AI would react.

Recent work from Facebook with Generative Adversarial Networks (GANs) may hint at a starting point here that contains such a model and predictive capability (if you're not familiar with GANs, we again refer you to our article about the freaky stuff neural networks do today). The "generative" part of the name means that they generate images. But more specifically, these are deep convolutional GANs, meaning that they contain an understanding of what they've seen in the images they've been trained on. For example, they know about windows, doors and TVs and where they go in rooms.

AGDL video predictions

What about making predictions? More work from Facebook involves video generation. Using Adversarial Gradient Difference Loss (AGDL) predictors, they predict what the next two frames of a video should be. In the image of a billiards game you can see the ground truth, i.e. what really happened, alongside what the AGDL network predicted. It's not very far into the future, but it's a start.

Those are at least small steps on the path from a naive object recognizer to one with human-like skills.

In Closing

Where might you have seen the Inception neural network recognizing objects before? We’ve covered [Lukas Biewald] using it on an RC car to recognize objects in his garage/workshop.

While this turned out to be fun for everyone to use as is, what other useful applications can you think of for it? What could be added? Let us know in the comments below.



Read the full article here by Hack a Day

Tuesday, 13 June 2017

A video game trailer has been nominated for an Academy Award

Despite being one of the biggest entertainment mediums on the planet, it's rare to see video games getting the mainstream recognition that they deserve. Now, it looks like that's beginning to change, as, for the first time in history, a video game trailer has been nominated for an Academy Award. The promotional video in question is for this year's intriguing philosophical world simulator, Everything.

At almost 11 minutes long, the thought-provoking promo video highlights the interconnected nature of, well, everything. Overdubbed with a speech from renowned British philosopher Alan Watts, the trailer uses in-game footage to show life in all its strange and wonderful forms. Thanks to its length and intriguing subject matter, this clever promotional video now stands a shot at winning an Academy Award for best animated short film.

Interestingly, this isn't the first time that the game's developer, David OReilly, has flirted with cinema. Bringing his gaming expertise to film, OReilly created the holographic video game scenes in Spike Jonze's brilliant sci-fi flick, Her.

For those who are curious about a game that lets you play as everything from an atom to a traffic cone, Everything is out now on PS4, PC and Mac. Having already won the Jury Prize for Animation at this year's Vienna Shorts film festival, the video may yet go on to be the first ever video game trailer to win an Academy Award.

Source: Eurogamer



Read the full article here by Engadget

This is the GoPro Fusion: the company's all-in-one VR camera

GoPro surprised everybody when it teased its "Fusion" 360-degree action camera back in April, mostly because GoPro rarely teases anything (Karma perhaps being the exception), preferring to show up out of nowhere with a glossy release video and launch event. The half-reveal didn't tell us much: just that a camera was coming in the fall, that it had a resolution of 5.2K, and that a feature called "OverCapture" would allow you to "punch out" a regular video from the spherical capture. And well, that was kinda it.

Today the company is willing to show a lot more leg, as it were. In fact, GoPro is using the Mountain Games (which it sponsors) as a platform to show it to the world for the first time. We're here, and we're getting the chance to check it out for ourselves to see if it really is the "ultimate GoPro." In the meantime, here's a first look at the device itself.

We already knew from the teased image what the Fusion would look like. With its gunmetal gray exterior and grip-friendly textured edges, it's not dissimilar to the Hero5 Black. But there are some key differences, the most obvious being that there are two lenses: one on the front and one on the rear.

Secondly, it's a shade bigger than the current Hero5 Black; it's also fully square, not oblong as the teaser images implied. The two lenses are set into the body without the typical GoPro square/flat protection cap, so that the camera gets an unobstructed 180-degree view rather than just seeing straight ahead. The units shown here in Vail are pre-production versions, but GoPro assures us this is the final design (bar perhaps some additional Fusion branding, etc.).

Despite having two "equal" sides, one of them carries the classic small LCD display that shows what mode the camera is in. There are, of course, the usual power and highlight buttons too. We weren't able to get a look at the battery yet, but hopefully we can get a sense of how much power it packs after some time with it. For now, here's what GoPro's Fusion 360 cam will look like!



Read the full article here by Engadget

Monday, 12 June 2017

How low-code software is meeting development challenges

There’s increasing demand to produce software quickly, which means companies are on the lookout for alternatives to traditional coding. Many are turning to low-code platforms, and workflow software specialist TrackVia has created an infographic charting the rise of low-code solutions. Among the highlights: one in three companies is already using low-code solutions, and 43 percent are interested in doing so. Its popularity is down to the fact that it addresses key challenges including compatibility with other software and apps, increased productivity and easier customization. Lower costs and the ability to offer mobile compatibility are seen as important too.… [Continue Reading]


Read the full article here by Betanews

Cancer Drug Proves To Be Effective Against Multiple Tumors

An anonymous reader writes: 86 cancer patients were enrolled in a trial of a drug that helps the immune system attack tumors. Though they had different kinds of tumor -- pancreas, prostate, uterus or bone -- they all shared a genetic mutation that disrupts their cells' ability to fix damaged DNA, found in 4% of all cancer patients. But tumors vanished and didn't return for 18 patients in the study, reports the New York Times, while 66 more patients "had their tumors shrink substantially and stabilize, instead of continuing to grow." The drug trial results were "so striking that the Food and Drug Administration already has approved the drug, pembrolizumab, brand name Keytruda, for patients whose cancers arise from the same genetic abnormality. It is the first time a drug has been approved for use against tumors that share a certain genetic profile, whatever their location in the body." The researchers say that just in the U.S. there are 60,000 new patients every year who could benefit from the new drug.

Read more of this story at Slashdot.



Read the full article here by Slashdot