DirectX 12

Direct3D 12 vs OpenGL: A quick Test

GeeXLab, the successor of GLSL Hacker, comes with support for Direct3D 12. The support is not complete, but most of the basics are available: command lists (CL), pipeline state objects (PSO), constant buffers (CB) and HLSL shaders.

An introduction to Direct3D programming with GeeXLab is available HERE.

Since GeeXLab now ships with both an OpenGL and a Direct3D 12 renderer, here is a quick benchmark that shows the performance difference between Direct3D 12 and OpenGL 3.2 in a very simple scene: a Phong-textured mesh (a torus, to be original) rendered at various polygon densities. This test uses one command list, one PSO, one HLSL program, one mesh and one texture.


GeeXLab - Direct3D 12 vs OpenGL

You can download both GeeXLab (version 0.9.3.0+ is recommended) and the test from THIS PAGE. The test is available in the host_api/Direct3D12_vs_OpenGL/ folder of the code sample pack (files: 09-lighting-mesh-d3d12.xml and 09-lighting-mesh-gl32.xml).

You can change the number of polygons by editing the source code of both files: lines 76-84 (09-lighting-mesh-d3d12.xml) and 47-54 (09-lighting-mesh-gl32.xml).

The results of this test should be taken with caution: it's my first implementation of a Direct3D 12 plugin for GeeXLab, and graphics drivers are constantly updated. I will update this post as soon as I find bugs or bring optimizations to GeeXLab that change the results.

Testbed:

  • CPU: Intel Core i5 6600K @3.5GHz
  • Motherboard: ASUS Z170 Pro Gaming
  • RAM: 8GB DDR4 Corsair Vengeance
  • OS: Windows 10 64-bit
  • Drivers:
    • Radeon R9 290X: Catalyst 15.10 beta
    • GeForce GTX 970: R358.50
    • HD Graphics 530: v4279

Clock speeds: stock values for the CPU, memory and graphics cards.

The FPS values in the following tables are average framerates.

Direct3D 12 results

Triangles    AMD Radeon R9 290X      NVIDIA GeForce GTX 970   Intel HD Graphics 530
             (avg FPS / GPU load)    (avg FPS / GPU load)     (avg FPS)
800          9100 / 40%              5500 / 25%               1360
5,000        8200 / 45%              5300 / 35%               1220
20,000       5800 / 60%              5100 / 45%               1100
80,000       2400 / 80%              2600 / 70%               850
320,000      720 / 90%               700 / 85%                500
500,000      480 / 98%               480 / 90%                400
2,000,000    130 / 100%              130 / 97%                160

OpenGL 3.2 results

Triangles    AMD Radeon R9 290X      NVIDIA GeForce GTX 970   Intel HD Graphics 530
             (avg FPS / GPU load)    (avg FPS / GPU load)     (avg FPS)
800          4600 / 25%              3700 / 35%               1220
5,000        4300 / 25%              3600 / 35%               1160
20,000       4200 / 25%              3600 / 36%               1060
80,000       4100 / 30%              3600 / 58%               840
320,000      4100 / 46%              2800 / 87%               500
500,000      3200 / 70%              2200 / 90%               420
2,000,000    1000 / 100%             930 / 95%                180


Direct3D 12 vs OpenGL - benchmark results

According to this test, Direct3D 12 is faster than OpenGL when the number of triangles is low. AMD Radeon cards are particularly fast! Around 80K polygons, Direct3D 12 offers roughly the same performance as OpenGL. Above 80K polygons, OpenGL is faster. The case of the Intel GPU is interesting because it delivers more or less the same performance in D3D12 and OpenGL. What's more, for a 2-million-polygon mesh, the Intel GPU is faster than a GTX 970 or an R9 290X in D3D12! It looks like, at high polygon counts, there is a CPU bottleneck somewhere in the D3D12 rendering path that does not reflect the real power of the GPUs.

The results are similar with the latest drivers (R361.43 / Crimson 15.12).

I also did a simple draw stress test: a quad is rendered 100, 400 and 4000 times. No hardware instancing is used; each quad is rendered with its own draw call. I only tested on my dev system, with a GeForce GTX 960 + R361.43.


GeeXLab - Direct3D 12 vs OpenGL

The test is available in the host_api/Direct3D12_vs_OpenGL/ folder of the code sample pack (files: 08-drawstress-d3d12.xml and 08-drawstress-opengl.xml).

In this test, a quad is made up of 4 vertices and 2 triangles.

To change the number of quads, edit the xml file and look for the lines:

quads = {x=10, y=10, size=10.0} -- 100 quads
--quads = {x=40, y=10, size=10.0} -- 400 quads
--quads = {x=100, y=40, size=10.0} -- 4000 quads

Direct3D 12

Num quads    GeForce GTX 960 (R361.43)
             (avg FPS / GPU load)
100          2900 / 20%
400          1070 / 26%
4000         180 / 20%

OpenGL 3.2

Num quads    GeForce GTX 960 (R361.43)
             (avg FPS / GPU load)
100          1840 / 58%
400          730 / 30%
4000         97 / 20%

GeeXLab is maybe not the best tool for this kind of test (a loop with 4000 iterations) because of the overhead of the virtual machine (Lua and host API function calls); a C/C++-based test would be better. But this GeeXLab test shows that we can draw more objects with Direct3D 12 than with OpenGL. This is particularly visible with 4000 quads, where D3D12 is nearly twice as fast: 180 FPS against 97 FPS for OpenGL.
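For reference, the core of such a draw-stress loop in a native C++ Direct3D 12 application would look roughly like the sketch below. This is illustrative only, not the GeeXLab code, and it assumes the pipeline state, root signature, a 6-entry index buffer (the two triangles of a quad) and a vertex buffer holding 4 vertices per quad have already been bound to the command list.

#include <d3d12.h>

// One explicit, non-instanced draw call per quad, all recorded into a single
// command list that is later submitted to the command queue in one call.
void RecordQuadDraws(ID3D12GraphicsCommandList* cmdList, int numQuads)
{
    for (int i = 0; i < numQuads; ++i)
    {
        // 6 indices (2 triangles); the base vertex advances by 4 per quad.
        cmdList->DrawIndexedInstanced(6, 1, 0, i * 4, 0);
    }
}

Recording many such calls is comparatively cheap in D3D12, which is consistent with the draw-call scaling seen in the tables above.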

What is Direct3D 12?

DirectX 12 introduces the next version of Direct3D, the 3D graphics API at the heart of DirectX. This version of Direct3D is faster and more efficient than any previous version. Direct3D 12 enables richer scenes, more objects, more complex effects, and full utilization of modern GPU hardware.

What makes Direct3D 12 better?

Direct3D 12 provides a lower level of hardware abstraction than ever before, which allows developers to significantly improve the multi-thread scaling and CPU utilization of their titles. With Direct3D 12, titles are responsible for their memory management. In addition, by using Direct3D 12, games and titles benefit from reduced GPU overhead via features such as command queues and lists, descriptor tables, and concise pipeline state objects.
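To make those terms concrete, here is a minimal C++ sketch, not taken from GeeXLab and with all error handling omitted, that creates the basic objects named above: a command queue, a command allocator and a command list. The device parameter is assumed to be a valid ID3D12Device.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateCoreObjects(ID3D12Device* device)
{
    // The command queue receives recorded command lists for execution on the GPU.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // The application, not the driver, manages command memory through allocators.
    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&allocator));

    // Command lists record state changes and draw calls; recording can happen on any thread.
    // The initial pipeline state is left null here; a real renderer would pass its PSO.
    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator.Get(), nullptr, IID_PPV_ARGS(&cmdList));
    cmdList->Close(); // lists are created in the recording state
}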

Direct3D 12, and Direct3D 11.3, introduce a set of new features for the rendering pipeline: conservative rasterization to enable reliable hit detection, volume-tiled resources so that streamed three-dimensional resources can be treated as if they were all in video memory, rasterizer ordered views to enable reliable transparency rendering, setting the stencil reference within a shader to enable special shadowing and other effects, and also improved texture mapping and typed Unordered Access View (UAV) loads.
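Because support for these features varies by GPU and driver, a D3D12 application typically queries them at startup. The following C++ sketch, assuming a valid ID3D12Device, checks the options structure for the features mentioned above.

#include <d3d12.h>
#include <cstdio>

void PrintOptionalFeatures(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &options, sizeof(options))))
        return;

    // The tiers and flags reported here depend entirely on the GPU and driver.
    printf("Conservative rasterization tier: %d\n", (int)options.ConservativeRasterizationTier);
    printf("Tiled resources tier:            %d\n", (int)options.TiledResourcesTier);
    printf("Rasterizer ordered views:        %s\n", options.ROVsSupported ? "yes" : "no");
    printf("Stencil ref from pixel shader:   %s\n", options.PSSpecifiedStencilRefSupported ? "yes" : "no");
    printf("Extra typed UAV load formats:    %s\n", options.TypedUAVLoadAdditionalFormats ? "yes" : "no");
}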

Who is Direct3D 12 for?

Direct3D 12 provides four main benefits to graphics developers (compared with Direct3D 11): vastly reduced CPU overhead, significantly improved power consumption, up to around twenty percent improvement in GPU efficiency, and cross-platform development for a Windows 10 device (PC, tablet, console or phone).

Direct3D 12 is certainly for advanced graphics programmers: it requires a fine level of tuning and significant graphics expertise. Direct3D 12 is designed to make full use of multi-threading, careful CPU/GPU synchronization, and the transition and re-use of resources from one purpose to another, all techniques that require a considerable amount of memory-level programming skill.

Another advantage that Direct3D 12 has is its small API footprint. There are around 200 methods, and about one third of these do all the heavy lifting. This means that a graphics developer should be able to educate themselves on – and master – the full API set without the weight of having to memorize a great volume of API calls.

Direct3D 12 does not replace Direct3D 11. The new rendering features of Direct3D 12 are available in Direct3D 11.3. Direct3D 11.3 is a low level graphics engine API, and Direct3D 12 goes even deeper.

There are at least two ways a development team can approach a Direct3D 12 title.

For a project that takes full advantage of the benefits of Direct3D 12, a highly customized Direct3D 12 game engine should be developed from the ground up.

One approach is that if graphics developers understand the use and re-use of resources within their titles, and can take advantage of this by minimizing uploading and copying, then a highly efficient engine can be developed and customized for these titles. The performance improvements could be very considerable, freeing up CPU time to increase the number of draw calls, and so adding more luster to graphics rendering.

The programming investment is significant, and debugging and instrumentation of the project should be considered from the very start: threading, synchronization and other timing bugs can be challenging.

A shorter-term approach would be to address known bottlenecks in a Direct3D 11 title; these can be addressed by using the 11on12 or interop techniques, enabling the two APIs to work together. This approach minimizes the changes necessary to an existing Direct3D 11 graphics engine; however, the performance gains will be limited to the relief of the bottleneck that the Direct3D 12 code addresses.

Direct3D 12 is all about dramatic graphics engine performance: ease of development, high level constructs, and compiler support have been scaled back to enable this. Driver support and ease of debugging remain on a par with Direct3D 11.

Direct3D 12 is new territory, for the inquisitive expert to explore.

Multi-GPU DirectX 12 shootouts show AMD with performance lead over Nvidia

One of the most exciting parts of Microsoft’s DirectX 12 API is the ability to pair graphics cards of varying generations, performance levels, or even manufacturers in a single PC, pooling their resources to make games and applications run better. Unfortunately, testing “Explicit Multi-Adapter” (EMA) support under real-world conditions (i.e. not synthetic benchmarks) has so far proven difficult. There has only been one game designed to take advantage of DX12’s numerous low-level improvements, including asynchronous compute, which allows GPUs to execute multiple command queues simultaneously, and the early builds of that game didn’t feature support for multiple GPUs.


As you might have guessed from the headline of this story, it does now. The latest beta version of Stardock’s real-time strategy game Ashes of the Singularity includes full support for EMA, meaning that for the first time we can observe what performance boost (if any) we get by doing the previously unthinkable and sticking an AMD and an Nvidia card into the same PC. That’s not to mention seeing how EMA stacks up against SLI or Crossfire (which have to be turned off in order to use DX12’s multi-GPU features) and whether AMD can repeat the ridiculous performance gains seen in the older Ashes benchmark.

Benchmarks conducted by a variety of sites, including Anandtech, Techspot, PC World, and Maximum PC all point to the same thing: EMA works, scaling can reach as high as 70 percent when adding a second GPU, and yes, AMD and Nvidia cards play nicely together.

That EMA works at all is something of an achievement for developer Stardock. Not only is it the first developer to implement the technology in an actual game, but doing so is hard going. Unlike with older APIs such as DX11 and OpenGL, or with multi-GPU support under the proprietary systems developed by Nvidia (SLI) and AMD (Crossfire), you have to be a tenacious developer indeed to work with EMA under DX12. Under DX12, work that was previously handled by the driver has to be done manually. That’s a double-edged sword: if the developer knows what they’re doing, DX12 could provide a big performance uplift; but if they don’t, performance could actually decrease.

That said, developers do have a few options for implementing multiple GPUs under DX12. Implicit Multi Adapter (IMA) is the easiest, and is essentially a DX12 version of Crossfire or SLI, with the driver doing most of the work to distribute tasks between GPUs (a feature not part of the Ashes benchmark). Then there’s EMA, which has two modes: linked and unlinked. Linked mode requires the GPUs to be close to the same hardware, while unlinked (which is what Ashes uses) allows any mix of GPUs to be used. The whole point of this, and why it works at all under DX12, is to make use of Split Frame Rendering (SFR). SFR breaks each frame of a game into several tiles, which are then rendered in parallel by the GPUs. This is different from the Alternate Frame Rendering (AFR) traditionally used by SLI and Crossfire, where each GPU renders an entire frame, duplicating data across the GPUs.

In theory, with EMA and SFR, performance should go way up. Plus, users should benefit from pooling graphics memory (i.e. using two 4GB GPUs would actually result in 8GB of usable graphics memory). The one bad thing about the Ashes benchmark? It currently only supports AFR.
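For context, unlinked EMA starts from something like the hedged C++ sketch below: the engine enumerates every adapter in the system and creates an independent D3D12 device on each one. Splitting the work between those devices, whether as SFR tiles or AFR frames, is then entirely the engine's job. This is illustrative only and not code from Ashes of the Singularity.

#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software rasterizer adapter

        // Any mix of vendors works here: an AMD and an Nvidia adapter can sit side by side.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);
    }
    return devices;
}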

Hitman could become the first big DirectX 12 game

IO Interactive and AMD team up for a big performance boost on Radeon graphics cards.
DirectX 12 may soon appear in a big-budget game with next month’s launch of Hitman.

AMD says it’s collaborating with Hitman developer IO Interactive to enable the next-generation graphics tech. It sounds like this will be the first game to take advantage of DirectX 12’s Asynchronous Shaders feature, which spreads different tasks (such as lighting, physics, and memory) across the GPU’s individual compute units, letting them all work at the same time. This should allow for big gains in image quality without a performance hit.
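The mechanism behind asynchronous shaders in Direct3D 12 is the ability to create more than one command queue. A minimal, hedged C++ sketch of the setup is shown below; how much graphics and compute work actually overlaps is up to the GPU and driver.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy work
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphicsQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy work only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // Work submitted to the two queues can run concurrently; ID3D12Fence objects
    // are used wherever one queue's results must be visible to the other.
}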

Indeed, Hitman might be the first DirectX 12 game on the market from a major publisher. The stealth action thriller is set to launch on March 11, long before other confirmed DirectX 12 titles such as Deus Ex: Mankind Divided and Fable Legends. It’s possible that Gears of War: Ultimate Edition could sneak in sooner with an early 2016 launch, but so far Microsoft hasn’t given a specific release date.

Aside from those new releases, DirectX 12 support for some existing games, such as Just Cause 3 and The Elder Scrolls Online, is also in the works. Some smaller games, such as Descent: Underground, added experimental DirectX 12 support last year.

To take advantage of DirectX 12, players will need to be running Windows 10 (Microsoft has no plans to bring the tech to older versions), and AMD cards will need to be based on the company’s Graphics Core Next architecture, which covers nearly every card released since 2012.

Quantum Break Coming To PC April 5th

Remedy Entertainment’s Quantum Break is coming to PC April 5th. Telling the tale of time travel gone wrong, Quantum Break features a mix of high-fidelity third-person shooting, cinematic in-game cutscenes, and live action cutscenes that star Shawn Ashmore (X-Men‘s Iceman), Dominic Monaghan (The Lord of the Rings‘ Meriadoc Brandybuck), Aidan Gillen (Game of Thrones‘ Petyr Baelish), and other top talent.

Having previously raised the bar for graphics and cutscenes with Alan Wake and Max Payne, Remedy’s latest endeavor is poised to once again advance graphical fidelity and immersion with a raft of advanced effects and features, courtesy of their new in-house Northlight Engine.

To experience the stunning scenes that Northlight and Quantum Break will produce, at a high level of fidelity, Remedy is recommending that gamers equip their systems with GeForce GTX 970 graphics cards. And for an “Ultra” experience, a GeForce GTX 980 Ti is recommended.

             MINIMUM                        RECOMMENDED                    ULTRA
OS           Windows 10 (64-bit)            Windows 10 (64-bit)            Windows 10 (64-bit)
DirectX      DirectX 12                     DirectX 12                     DirectX 12
CPU          Intel Core i5-4460, 2.70GHz    Intel Core i5-4690, 3.9GHz     Intel Core i7-4790, 4GHz
             or AMD FX-6300                 or AMD equivalent              or AMD equivalent
GPU          (not listed)                   (not listed)                   (not listed)
VRAM         2 GB                           4 GB                           6 GB
RAM          8 GB                           16 GB                          16 GB

If you’re itching to buy Quantum Break but can’t decide between the Xbox One and Windows 10 versions, don’t fret: Quantum Break is kick-starting Microsoft’s Cross-Buy initiative. Simply put, if you buy Quantum Break for Xbox One at a participating retailer, you’ll also receive a free copy of the game for PC. Furthermore, any progress made in the game is automatically shared between your PC and Xbox One, enabling you to continue the story from where you left off on either platform.

For further details about the PC edition of Quantum Break, stay tuned to GeForce.com. In the meantime, check out a batch of new screenshots below.

[Six Quantum Break PC announcement screenshots]

Fable Legends: AMD and Nvidia go head-to-head in latest DirectX 12 benchmark

As DirectX 12 and Windows 10 roll out across the PC ecosystem, the number of titles that support Microsoft’s new API is steadily growing. Last month, we previewed Ashes of the Singularity and its DirectX 12 performance; today we’re examining Microsoft’s Fable Legends. This upcoming title is expected to debut on both Windows PCs and the Xbox One and is built with Unreal Engine 4.

Like Ashes, Fable Legends is still very much a work-in-progress. Unlike Ashes of the Singularity, which can currently be bought and played, Microsoft chose to distribute a standalone benchmark for its first DirectX 12 title. The test has little in the way of configurable options and performs a series of flybys through complex environments. Each flyby highlights a different aspect of the game, including its day/night cycle, foliage and building rendering, and one impressively ugly troll. If Ashes of the Singularity gave us a peek at how DX12 would handle several dozen units and intense particle effects, Fable Legends looks more like a conventional first-person RPG or FPS.


There are other facets to Fable Legends that make this a particularly interesting match-up, even if it’s still very early in the DX12 development cycle. Unlike Ashes of the Singularity, which is distributed through Oxide, this is a test distributed directly by Microsoft. It uses the Unreal 4 engine — and Nvidia and Epic, Unreal’s developer, have a long history of close collaboration. Last year, Nvidia announced GameWorks support for UE4, and the UE3 engine was an early supporter of PhysX on both Ageia PPUs and later, Nvidia GeForce cards.

Test setup

We tested the GTX 980 Ti and Radeon Fury X in Windows 10 using the latest version of the operating system. Our testbed was an Asus X99-Deluxe motherboard with a Core i7-5960X and 16GB of DDR4-2667 memory. We used an AMD-provided beta driver for the Fury X and Nvidia’s latest WHQL-approved driver, 355.98, for the GTX 980 Ti. Nvidia hasn’t released a beta Windows 10 driver since last April, and the company didn’t contact us to offer a specific driver for the Fable Legends debut.


The benchmark itself was provided by Microsoft and can run in a limited number of modes. Microsoft provided three presets — a 720p “Low” setting, a 1080p “Ultra” and a 4K “Ultra” benchmark. There are no user-configurable options besides enabling or disabling V-Sync (we tested with V-Sync disabled) and the ability to specify low settings or ultra settings. There is no DX11 version of the benchmark. We ran all three variants on both the Fury X and GTX 980 Ti.

Test Results (Original and Amended):

Once other sites began posting their own test results, it became obvious that our own 980 Ti and Fury X benchmarks were both running more slowly than they should have. It’s normal to see some variation between review sites, but gaps of 15-20% in a benchmark with no configurable options? That meant a different problem. Initial retests confirmed the figures shown below, even after wiping and reinstalling drivers.

[Chart: initial Fable Legends results for the GTX 980 Ti and Fury X]

The next thing to check was power management — and this is where we found our smoking gun. We tested Windows 10 in its “Balanced” power configuration, which is our standard method of testing all hardware. While we sometimes increase to “High Performance” in corner cases or to measure its impact on power consumption, Windows can generally be counted on to handle power settings, and there’s normally no performance penalty for using this mode.

Imagine our surprise, then, to see the following when we fired up the Fable benchmark:

[Screenshot: the benchmark running under the Balanced power plan, with the CPU clocked down to 1.17GHz]

The benchmark is actively running in the screenshot above, with power conservation mode and clock speed visible at the same time. And while CPU clock speed isn’t the determining factor in most titles, clocking down to 1.17GHz is guaranteed to have an impact on overall frame rates. Switching to “High Performance” pegged the CPU clock between 3.2 and 3.3GHz — exactly where we’d expect it to be. It’s not clear what caused this problem — it’s either a BIOS issue with the Asus X99-Deluxe or an odd driver bug in Windows 10, but we’ve retested both GPUs in High Performance mode.

[Chart: retested benchmark results]

These new results are significantly different from our previous tests. 4K performance is unchanged, and the two GPUs still tie, but 1080p performance improves by roughly 8% on the GTX 980 Ti and 6% on the Fury X. Aftermarket GTX 980 Ti results show higher-clocked manufacturing variants of that card as outperforming the R9 Fury X, and those are perfectly valid data points: if you want to pay the relatively modest price premium for a high-end card with more clock headroom, you can expect a commensurate payoff in this test. Meanwhile, the R9 Fury X no longer wins 720p as it did before. Both cards are faster here, but the GTX gained much more from the clock speed boost, leaping up 27%, compared to just 2% for AMD. While this conforms to our general test trends in DX11, in which AMD performs more capably at higher resolutions, it’s still unusual to see only one GPU respond so strongly to such ludicrously low clock speeds.

These new runs, like the initial ones, were performed multiple times. We ran the benchmark 4x on each card, at each quality preset, but threw out the first run in each case. We also threw out runs that appeared unusually far from the average.

Why include AMD results?

In our initial coverage for this article, we included a set of AMD-provided test results. This was mostly done for practical reasons: I don’t actually have an R9 390X, 390, or R9 380, and therefore couldn’t compare performance in the midrange graphics stack. Our decision to include this information “shocked” Nvidia’s PR team, which pointed out that no other reviewer had found the R9 390 beating the GTX 980.

Implications of impropriety deserve to be taken seriously, as do charges that test results have misrepresented performance. So what’s the situation here? While we may have shown you chart data before, AMD’s reviewer guide contains the raw data values themselves. According to AMD, the GTX 980 scored 65.36 FPS in the 1080p Ultra benchmark using Nvidia’s 355.98 driver (the same driver we tested). Our own results actually point to the GTX 980 being slightly slower: when we put the card through its paces for this section of our coverage, it landed at 63.51 FPS. Still, that’s just a 3% difference.


It’s absolutely true that Tech Report’s excellent coverage shows the GTX 980 beating the R9 390 (TR was the only website to test an R9 390 in the first place). But that doesn’t mean AMD’s data is non-representative. Tech Report notes that it used a Gigabyte GTX 980, with a base clock of 1228MHz and a boost clock of 1329MHz. That’s 9% faster than the clocks on my own reference GTX 980 (1127MHz and 1216MHz respectively).

Multiply our 63.51 FPS by 1.09x and you end up with 69 FPS, exactly what Tech Report reported for the GTX 980. And if you have an Nvidia GTX 980 clocked at this speed, yes, you will outperform a stock-clocked R9 390. That, however, doesn’t mean that AMD lied in its test results. A quick trip to Newegg reveals that GTX 980s ship at a variety of clocks, from a low of 1126MHz to a high of 1304MHz. That, in turn, means that the highest-end GTX 980 is as much as 15% faster than the stock model. Buyers who tend to buy on price are much more likely to end up with cards at the base frequency: the cheapest EVGA GTX 980 is $459, compared to $484 for the 1266MHz version.


There’s no evidence that AMD lied or misconstrued the GTX 980’s performance. Neither did Tech Report. Frankly, we prefer testing retail hardware when such equipment is available, but since GPU vendors tend to charge a premium for higher-clocked GPUs, it’s difficult to select any single card and declare it representative.

Amended Conclusion:

Nvidia’s overall performance in Fable Legends remains excellent, though whether Team Red or Green wins is going to depend on which type of card, specifically, you’ve chosen to purchase. The additional headroom left in many of Nvidia’s current designs is a feature, not a bug, and while it makes it more difficult to point to any single card and declare it representative of GTX 980 Ti or 980 performance, we suspect most enthusiasts appreciate the additional headroom.

The power issues that forced a near-total rewrite of this story, however, also point to the immaturity of the DirectX 12 ecosystem. Whether you favor AMD or Nvidia, it’s early days for both benchmarks and GPUs, and we wouldn’t recommend making drastic decisions around expected future DirectX 12 capability. There are still unanswered questions and unclear situations surrounding certain DirectX 12 features, like asynchronous computing on Nvidia cards, but the overall performance story from Team Red vs. Team Green is positive. The fact that a stock R9 390, at $329, outperforms a stock GTX 980 with an MSRP of $460, however, is a very nice feather in AMD’s cap.

 

GeForce GTX 980 Notebooks a VR Developer’s Dream

Virtual reality takes immense amounts of computing horsepower. Creating VR content takes even more graphics grunt. No surprise, then, that content creators have been asking us for VR-ready performance in a notebook.

Today they’re getting it. That’s thanks to a spate of new notebooks equipped with the power of our GeForce GTX 980 GPU.

That’s no typo. These notebooks use the same 2,048-core GM204 GPU found in our GeForce GTX 980 graphics cards. They’re equipped with GDDR5 memory, fast CPUs, multiple USB 3.0 ports and direct HDMI out, making them the world’s first notebooks to meet (and exceed) the recommended spec for Oculus Rift.

Run Unreal Engine 4? Not a problem. “The GTX 980 notebook includes a fully loaded GTX 980 GPU in a laptop form factor. It’s a great platform for utilizing Unreal Engine 4’s high-end features, such as physically based shading, DirectX 12 and virtual reality device support,” says Tim Sweeney, founder and CEO of Epic Games.


Power demanding VR games? Without breaking a sweat. “The GeForce GTX 980 notebook is a very impressive piece of hardware. EVE: Valkyrie runs super smooth on it with rock-solid performance,” says Owen O’Brien, executive producer, EVE: Valkyrie, CCP Games.

These machines are designed for heavy-duty gaming and game development. Both the CPU and GPU are overclockable. Many are equipped with G-SYNC display technology for stutter-free, tear-free frame delivery. And those are just the highlights.

Whether a developer is showcasing a new VR demo at an event or creating the next killer VR app, GeForce GTX 980 notebooks can deliver on the go. Borrowing a rig or safely packing one up to hit the road is now a thing of the past.

The first GeForce GTX 980-equipped notebooks will be available in October. The full list of OEMs includes: MSI, Aorus, ASUS, Clevo, Origin PC, Maingear, Falcon NW, Digital Storm, Sager, XMG, PC Specialist, LDLC, Hyperbook, G-Tune, AfterShock, BossMonster, Metabox and Terransforce.

GameWorks VR SDK Update

To coincide with the release of this new hardware, our GameWorks VR SDK gets an update with the following features:

  • GeForce GTX notebook support
  • VR SLI enhancements when using Direct Mode
  • Expanded head-mounted display support
  • Bug fixes and stability improvements

Download our latest display driver to get the GameWorks VR updates. Stay tuned for more VR news at Oculus Connect later this week!



About me

My name is Sayed Ahmadreza Razian and I hold a master's degree in Artificial Intelligence.
Click here for my CV/resume page.

Related topics such as image processing, machine vision, virtual reality, machine learning, data mining, and monitoring systems are my research interests, and I intend to pursue a PhD in one of these fields.


My Scientific expertise
  • Image processing
  • Machine vision
  • Machine learning
  • Pattern recognition
  • Data mining - Big Data
  • CUDA Programming
  • Game and Virtual reality




Site by images
Statistics
  • 2,319
  • 10,075
  • 92,535
  • 25,499
Recent News Posts