Nvidia

ASUS GeForce GTX 1080 TURBO Review

1 – Overview

This GTX 1080 TURBO is the simplest GTX 1080 I have tested. By simplest, I mean the graphics card comes with a basic VGA cooler (nothing like the one on the GTX 1080 Strix!), no factory overclocking and a minimal bundle.

The GTX 1080 TURBO is powered by a Pascal GP104 GPU clocked at 1607MHz (base clock) and 1733MHz (boost clock). Both clock speeds are the reference ones: no out-of-the-box overclocking. The card has 8GB of GDDR5X graphics memory clocked at 10010MHz, like the NVIDIA reference model.

The ASUS GTX 1080 TURBO homepage can be found HERE.

2 – Gallery

The bundle is minimal: the GTX 1080, a user’s guide, a CD-ROM with drivers and utilities, and an invite code for World of Warships:

ASUS GeForce GTX 1080 TURBO

ASUS GeForce GTX 1080 TURBO


The GTX 1080 TURBO:

ASUS GeForce GTX 1080 TURBO

ASUS GeForce GTX 1080 TURBO

ASUS GeForce GTX 1080 TURBO


No backplate… this card is not expensive enough to deserve one!


ASUS GeForce GTX 1080 TURBO

The GTX 1080 TURBO comes with one 8-pin power connector, so the total power draw can reach 225W (150W from the connector + 75W from the PCI-Express slot). The TDP of the reference GTX 1080 is 180W. The fan has a diameter of 65mm.


ASUS GeForce GTX 1080 TURBO

The card offers two DisplayPort 1.4 connectors, two HDMI 2.0 connectors and one DVI connector.


ASUS GeForce GTX 1080 TURBO

An LED indicates the power supply status (white = OK, red = error).


ASUS GeForce GTX 1080 TURBO

The GTX 1080 Turbo versus GTX 1080 Strix.


ASUS GeForce GTX 1080 TURBO

3 – GPU Data


ASUS GeForce GTX 1080 TURBO + GPU Caps Viewer
ASUS GeForce GTX 1080 TURBO + GPU Shark

4 – Benchmarks

Testbed configuration:
– CPU: Intel Core i5 6600K @ 3.5GHz
– Motherboard: ASUS Z170 Pro Gaming
– Memory: 8GB DDR4 Corsair Vengeance LPX @ 2666MHz
– PSU: Corsair AX860i
– Software: Windows 10 64-bit + NVIDIA R376.09

4.1 – 3DMark Sky Diver

29024 – ASUS GeForce GTX 1080 Strix – R368.51
28328 – ASUS GeForce GTX 1080 TURBO – R376.09
26828 – EVGA GeForce GTX 1070 FTW – R376.09
25134 – ASUS GeForce GTX 980 Ti – R353.06
23038 – ASUS GeForce GTX 980 Strix – R344.75
21964 – MSI Radeon R9 290X Gaming – Catalyst 14.9 WHQL
21811 – Gainward GeForce GTX 970 Phantom – R344.75
20274 – EVGA GeForce GTX 780 – R344.75
17570 – MSI Radeon HD 7970 – Catalyst 14.9 WHQL
17533 – EVGA GeForce GTX 680 – R344.75

4.2 – 3DMark Fire Strike

Fire Strike is a Direct3D 11 benchmark for high-performance gaming PCs with serious graphics cards.


3DMark Fire Strike

15583 – ASUS GeForce GTX 1080 Strix – R368.51
14810 – ASUS GeForce GTX 1080 TURBO – R376.09
13438 – EVGA GeForce GTX 1070 FTW – R376.09
12514 – ASUS GeForce GTX 980 Ti – R353.06
10574 – ASUS GeForce GTX 980 Strix – R344.75
9382 – MSI Radeon R9 290X Gaming – Catalyst 14.9 WHQL
8870 – MSI GTX 970 CLASSIC 4GD5T OC – R344.75
8203 – EVGA GeForce GTX 780 – R344.75
6572 – MSI Radeon HD 7970 – Catalyst 14.9 WHQL
6399 – ASUS Strix GTX 960 DC2 OC 4GB – R353.06
6235 – EVGA GeForce GTX 680 – R344.75

4.3 – 3DMark Fire Strike Ultra

5125 (Graphics score: 5330) – ASUS GeForce GTX 1080 Strix – R368.51
4865 – ASUS GeForce GTX 1080 TURBO – R376.09
4244 – EVGA GeForce GTX 1070 FTW – R376.09
2617 (Graphics score: 2592) – MSI GTX 970 CLASSIC 4GD5T OC – R368.69
2178 (Graphics score: 2134) – EVGA GeForce GTX 780 – R368.69

4.4 – 3DMark Time Spy

6393 (Graphics score: 7449) – ASUS GeForce GTX 1080 Strix – R372.54
6162 – ASUS GeForce GTX 1080 TURBO – R376.09
5358 – EVGA GeForce GTX 1070 FTW – R376.09
4177 (Graphics score: 4274) – EVGA GeForce GTX 1060 SC – R368.81
3658 (Graphics score: 3640) – MSI Radeon RX 470 Gaming X – Crimson 16.8.2
3410 (Graphics score: 3382) – MSI GTX 970 CLASSIC 4GD5T OC – R368.69

4.5 – FurMark 1.18

FurMark is an OpenGL 2 benchmark that renders a furry donut. This benchmark is known for its extreme GPU workload.


FurMark
Settings: Preset:1080 (1920×1080)

7151 points (119 FPS) – ASUS GeForce GTX 1080 Strix – R368.51
7063 points (118 FPS) – ASUS GeForce GTX 1080 TURBO – R376.09
6233 points (103 FPS) – ASUS GeForce GTX 980 Ti – R353.06
6143 points (102 FPS) – EVGA GeForce GTX 1070 FTW – R376.09
4660 points (77 FPS) – ASUS GeForce GTX 980 Strix – R344.75
4592 points (76 FPS) – MSI Radeon R9 290X Gaming – Catalyst 14.9 WHQL
4050 points (67 FPS) – EVGA GeForce GTX 780 – R344.75
3335 points (55 FPS) – MSI GTX 970 CLASSIC 4GD5T OC – R344.75
2951 points (49 FPS) – MSI Radeon HD 7970 – Catalyst 14.9 WHQL
2733 points (45 FPS) – EVGA GeForce GTX 680 – R344.75
2566 points (42 FPS) – ASUS Strix GTX 960 DC2 OC 4GB – R353.06

Settings: Preset:2160 (3840×2160)

2715 points (45 FPS) – ASUS GeForce GTX 1080 Strix – R368.51
2624 points (44 FPS) – ASUS GeForce GTX 1080 TURBO – R376.09
2201 points (37 FPS) – EVGA GeForce GTX 1070 FTW – R376.09
1385 points (23 FPS) – EVGA GeForce GTX 780 – R368.69
1339 points (22 FPS) – MSI GTX 970 CLASSIC 4GD5T OC – R368.69

4.6 – Resident Evil 6 Benchmark

Resident Evil 6 (RE6) is a Direct3D 9 benchmark. The RE6 benchmark can be downloaded from this page.


Resident Evil 6
Settings: resolution: 1920 x 1080, anti-aliasing: FXAA3HQ, all parameters set to high.

21410 points – ASUS GeForce GTX 1080 Strix – R372.54
21295 points – ASUS GeForce GTX 1080 TURBO – R376.09
20869 points – EVGA GeForce GTX 1070 FTW ACX3.0 – R376.09
18527 points – EVGA GeForce GTX 1060 SC – R372.54
16332 points – MSI GTX 970 Classic – R353.06
14522 points – MSI Radeon R9 290X Gaming 4GB – Crimson 16.8.2
13789 points – MSI Radeon RX 470 Gaming X 8GB – Crimson 16.8.2
13405 points – EVGA GTX 780 – R353.06
11935 points – ASUS Strix GTX 960 DC2 OC 4GB – R353.06
11442 points – EVGA GTX 680 – R353.06
8794 points – MSI GTX 660 Hawk – R353.06
5714 points – ASUS GTX 750 – R353.06
4495 points – ASUS G551Jw notebook w/ GTX 960M 4GB – R353.06

4.7 – Unigine Valley 1.0

Unigine Valley is a Direct3D/OpenGL benchmark from the same development team as Unigine Heaven. More information can be found HERE and HERE.


Unigine Valley
Settings: Extreme HD (Direct3D 11, 1920×1080 fullscreen, 8X MSAA)

102.0 FPS, Score: 4269 – ASUS GeForce GTX 1080 Strix – R372.54
101.0 FPS, Score: 4227 – ASUS GeForce GTX 1080 TURBO – R376.09
90.5 FPS, Score: 3788 – EVGA GeForce GTX 1070 FTW ACX3.0 – R376.09
86.1 FPS, Score: 3602 – ASUS GeForce GTX 980 Ti – R353.06
68.0 FPS, Score: 2846 – EVGA GeForce GTX 1060 SC – R372.54
67.8 FPS, Score: 2837 – ASUS GeForce GTX 980 Strix – R344.75
63.3 FPS, Score: 2648 – MSI Radeon R9 290X Gaming – Crimson 16.8.2
58.7 FPS, Score: 2457 – Gainward GeForce GTX 970 Phantom – R344.75
57.8 FPS, Score: 2418 – EVGA GeForce GTX 780 – R344.75
56.0 FPS, Score: 2344 – MSI GTX 970 CLASSIC 4GD5T OC – R344.75
46.4 FPS, Score: 1942 – MSI Radeon RX 470 Gaming X 8GB – Crimson 16.8.2
42.9 FPS, Score: 1796 – EVGA GeForce GTX 680 – R344.75
39.9 FPS, Score: 1668 – MSI Radeon HD 7970 – Catalyst 14.9 WHQL
35.8 FPS, Score: 1500 – ASUS Strix GTX 960 DC2 OC 4GB – R353.06
34.6 FPS, Score: 1446 – EVGA GeForce GTX 580 – R344.75
32.4 FPS, Score: 1358 – MSI GTX 660 Hawk – R353.06
29.3 FPS, Score: 1224 – Sapphire Radeon HD 6970 – Catalyst 14.9 WHQL
25.6 FPS, Score: 1071 – EVGA GeForce GTX 480 – R344.75
19.4 FPS, Score: 812 – ASUS GeForce GTX 750 – R344.75
16.2 FPS, Score: 679 – ASUS Radeon HD 7770 DC – Catalyst 14.9 WHQL

5 – Burn-in Test

Testbed configuration:
– CPU: Intel Core i5 6600K @ 3.5GHz
– Motherboard: ASUS Z170 Pro Gaming
– Memory: 8GB DDR4 Corsair Vengeance LPX @ 2666MHz
– PSU: Corsair AX860i
– Software: Windows 10 64-bit + NVIDIA R376.09

At idle, the total power consumption of the testbed is 38W. The GPU temperature is 30°C. The VGA cooler is barely audible, though still noticeable in an open case.

To stress test the GTX 1080 TURBO, I’m going to use the latest FurMark 1.18.2. A resolution of 1024×768 is enough to stress test the graphics card.

The first stress test is done with the default power target: 100%TDP. After 5 minutes, the total power consumption of the testbed was 233W and the GPU temperature was 79°C.

Before starting the second stress test, I quickly launched MSI Afterburner and set the power target to its maximum value, which is 120% TDP for this GTX 1080 TURBO. The results are now a bit different: the total power consumption jumped to 267W and the GPU temperature reached 83°C. The VGA cooler was noisy…


ASUS GeForce GTX 1080 TURBO - FurMark stress test

An approximation of the graphics card power consumption is:

P = (267 – 38 – 20) x 0.9
P = 188W @ 120%TDP

where 0.9 is the power efficiency factor of the Corsair AX860i PSU, and 20W is the additional power draw of the CPU.
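
For reference, here is a minimal Python sketch of the same estimate; the function name is mine, and the 0.9 efficiency factor and 20W CPU overhead are simply the approximations used above:

# Minimal sketch of the power estimate above; efficiency and CPU overhead
# are the article's approximations, not measured values.
def card_power_estimate(total_wall_power_w, idle_power_w=38.0,
                        cpu_extra_w=20.0, psu_efficiency=0.9):
    """Rough graphics card power draw derived from wall-socket readings."""
    return (total_wall_power_w - idle_power_w - cpu_extra_w) * psu_efficiency

print(f"{card_power_estimate(267):.1f} W at 120% TDP")  # 188.1 W
print(f"{card_power_estimate(233):.1f} W at 100% TDP")  # 157.5 W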

Thermal Imaging

Idle state


ASUS GeForce GTX 1080 TURBO - Thermal imaging - idle state

Load state


ASUS GeForce GTX 1080 TURBO - Thermal imaging - stress test

6 – Conclusion

This GTX 1080 TURBO is a basic GTX 1080. Performance is good and in the expected range for a GTX 1080, but that’s all. The card has a cheap VGA cooler: at idle the noise is barely audible (good!) but under heavy load the cooler is noisy (not good!!). And the 0dB fan technology found on other models? Not present… This kind of VGA cooler should not be here: a high-end graphics card based on a GP104 GPU deserves a decent cooler.

The GPU temperature at idle is good (30°C) but can exceed 80°C under load. There is no backplate for mechanical protection and heat dissipation. Compared to other models like the GTX 1080 Strix, this card is cheaper. So if you really need a GTX 1080 for its graphics performance but don’t want to spend too much money, this is your card.

If you are still hesitating, a graphics card like the EVGA GTX 1070 FTW might be a better choice: very good performance, nearly silent and cheaper…


ASUS GeForce GTX 1080 TURBO
Thanks to Internex for this ASUS GTX 1080 Turbo!

NVIDIA TITAN Xp vs TITAN X

NVIDIA TITAN Xp

NVIDIA more or less silently launched a new high-end graphics card about 10 days ago. Here are some pictures of the TITAN Xp versus the TITAN X (launched last year), both gaming cards being based on a Pascal GP102 GPU…

The TITAN Xp is based on a full GP102 GPU with 3840 CUDA cores while the TITAN X has only 3584 CUDA cores. The TITAN Xp has 240 texture units while the TITAN X has 224 texture units. Both cards have the same number of ROPs (96) and the same amount of memory: 12GB of GDDR5X.

On the physical side, there are few differences between the TITAN Xp and the TITAN X. NVIDIA has not changed the name on the VGA shroud: TITAN X for both cards…

NVIDIA TITAN Xp

NVIDIA TITAN Xp


The only things that distinguish the two cards are the box, the PCB color (brown for the Xp) and the DVI connector (the Xp has no DVI connector):

NVIDIA TITAN Xp vs TITAN X

NVIDIA TITAN Xp vs TITAN X

NVIDIA TITAN Xp vs TITAN X

NVIDIA TITAN Xp vs TITAN X

NVIDIA TITAN Xp vs TITAN X

NVIDIA TITAN Xp vs TITAN X

NVIDIA TITAN Xp vs TITAN X

NVIDIA TITAN Xp vs TITAN X


The TITAN Xp alone:

NVIDIA TITAN Xp

NVIDIA TITAN Xp

NVIDIA TITAN Xp

NVIDIA TITAN Xp

Using Virtual Reality at the IBM Watson Image Recognition Hackathon

Five teams of developers gathered at the Silicon Valley Virtual Reality (SVVR) headquarters in California last month to learn about new features of IBM Watson’s Visual Recognition service, such as the ability to train and retrain custom classes on top of the stock API. Combined with the Watson Unity SDK, these features enable new and interesting use cases in VR. Staff from IBM, NVIDIA and SVVR were on-site to help developers get the most out of the event.

The Watson Developer Cloud Unity SDK makes it easy for developers to take advantage of modern AI techniques through a set of cloud-based services with simple REST APIs. These APIs can be accessed from the Unity development environment using the Watson Unity SDK, available on GitHub, so anyone can take the code and improve upon it.
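
Outside Unity, the underlying service can also be reached directly over HTTP. The Python sketch below only illustrates the general shape of such a call with the requests library; the endpoint URL, query parameters, API key and response layout shown here are placeholders and assumptions for illustration, not the documented Watson API, so check the official Visual Recognition reference before use:

import requests

# Hypothetical endpoint and parameters -- consult the Watson Visual Recognition
# documentation for the real URL, version string and authentication scheme.
ENDPOINT = "https://example.com/visual-recognition/api/v3/classify"
API_KEY = "your-api-key"

with open("screenshot.png", "rb") as image_file:
    response = requests.post(
        ENDPOINT,
        params={"api_key": API_KEY, "version": "2016-05-20"},
        files={"images_file": image_file},
    )

# Assumed response shape: a JSON document listing recognized classes and scores.
for image in response.json().get("images", []):
    for classifier in image.get("classifiers", []):
        for cls in classifier.get("classes", []):
            print(cls.get("class"), cls.get("score"))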

This was the first experience building training sets for image recognition for many of the developers, but the Watson API made things relatively simple. Teams were able to get started within a few minutes.

After two days of hacking, the winning team was Watson and Waffles, an intriguing adventure game which required the player to sketch game objects using the Vive controller. Watson identified the objects, and then the game manifested them for the player to use.

Using Virtual Reality at the IBM Watson Image Recognition Hackathon

The game was innovative — combining VR adventuring with room-scale sketching using motion controllers. The winning team received an NVIDIA TITAN X GPU and an HTC Vive VR setup.

“It was fantastic to experiment with the new AI image recognition technologies in all new ways in VR, and now my mind is running wild with ideas of how AI could become an essential part of a game developer’s toolbox — allowing us to rapid prototype things that would otherwise require hours of entangled webs of logic,” said Michael Jones, developer on the Watson and Waffles team.

The runner-up teams built a game based around recognizing playing cards with the Vive’s front-facing camera, and a VR panoramic photo viewing application that allowed users to take a virtual journey through their memories.

“I really wanted to do a Watson VR hackathon focusing on a single service — in this case visual recognition — to see what creative and interesting new use cases developers could come up with, and thanks to the high quality of participants and the support of great partners like NVIDIA, HTC and SVVR, we were blown away by the results,” said Michael Ludden, Product Manager, Developer Relations at IBM Watson, who also mentioned plans to host similar hackathons in the future.

Watson uses GPU acceleration to power components of the image recognition API. Image recognition can be used in a variety of virtual and augmented reality applications, and besides recognizing images, Watson offers services for interactive speech, natural language processing, translation, speech-to-text, text-to-speech and data analytics, among others.

Deep Learning to Unlock Mysteries of Parkinson’s Disease

Researchers at The Australian National University are using deep learning and NVIDIA technologies to better understand the progression of Parkinson’s disease.

Currently it is difficult to determine what type of Parkinson’s someone has or how quickly the condition will progress. The study will be conducted over the next five years at the Canberra Hospital in Australia and will involve 120 people suffering from the disease and an equal number of non-sufferers as a control group.

“There are different types of Parkinson’s that can look similar at the point of onset, but they progress very differently,” says Dr Deborah Apthorp of the ANU Research School of Psychology. “We are hoping the information we collect will differentiate between these different conditions.”

Researchers Alex Smith (L) and Dr Deborah Apthorp (R) work with Parkinson’s disease sufferer Ken Hood (middle).

Dr Apthorp said the research will measure brain imaging, eye tracking, visual perception and postural sway.

Using the data collected during the study, the researchers will train their deep learning models on a GeForce GTX 1070 GPU with cuDNN to help find patterns that indicate degradation of motor function correlating with Parkinson’s.
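
The article does not describe the network itself; purely as an illustration of the kind of model cuDNN accelerates, here is a small PyTorch sketch of a 1D convolutional classifier over fixed-length postural-sway windows (the input shape, channel count and two output classes are assumptions, not details from the ANU study):

import torch
import torch.nn as nn

class SwayClassifier(nn.Module):
    """Tiny illustrative 1D CNN over postural-sway time series."""
    def __init__(self, channels=3, classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, classes)

    def forward(self, x):  # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

device = "cuda" if torch.cuda.is_available() else "cpu"  # the GPU path uses cuDNN
model = SwayClassifier().to(device)
windows = torch.randn(8, 3, 1024, device=device)  # 8 windows, 3 sway axes, 1024 samples
print(model(windows).shape)  # torch.Size([8, 2])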

The researchers plan to incorporate virtual reality into their work by having sufferers wear head-mounted displays (HMDs), which will help them better understand how self-motion perception is altered in Parkinson’s disease, and by using stimuli that mimic the visual scene during self-motion.

“Additionally, we would like to explore the use of eye tracking built into HMDs, which is a much lower-cost alternative to a full research eye tracking system and reduces the equipment needed to a single, highly portable and versatile device,” says researcher Alex Smith.

Introducing Parker, NVIDIA’s Newest SOC for Autonomous Vehicles

NVIDIA today took the cloak off Parker, our newest mobile processor that will power the next generation of autonomous vehicles.

Get Under the Hood of Parker, Our Newest SOC for Autonomous Vehicles

Speaking at the Hot Chips conference in Cupertino, California, we revealed the architecture and underlying technology of this highly advanced processor, which is ideally suited for automotive applications like self-driving cars and digital cockpits.

You may recall we mentioned Parker at CES earlier this year, when we introduced the NVIDIA DRIVE PX 2 platform (shown above). That platform uses two Parker processors and two Pascal architecture-based GPUs to power deep learning applications. More than 80 carmakers, tier 1 suppliers and university research centers around the world are now using our DRIVE PX 2 system to develop autonomous vehicles. This includes Volvo, which plans to road test DRIVE PX 2 systems in XC90 SUVs next year.

Forging a Future for Automotive

Parker delivers class-leading performance and energy efficiency, while supporting features important to the automotive market such as deep learning, hardware-level virtualization for tighter design integration, a hardware-based safety engine for reliable fault detection and error processing, and feature-rich IO ports for automotive integration.

Built around NVIDIA’s highest performing and most power-efficient Pascal GPU architecture and the next generation of NVIDIA’s revolutionary Denver CPU architecture, Parker delivers up to 1.5 teraflops(1) of performance for deep learning-based self-driving AI cockpit systems.

Need for Speed

Parker delivers 50 to 100 percent higher multi-core CPU performance than other mobile processors(2). This is thanks to its CPU architecture consisting of two next-generation 64-bit Denver CPU cores (Denver 2.0) paired with four 64-bit ARM Cortex A57 CPUs. These all work together in a fully coherent heterogeneous multi-processor configuration.

The Denver 2.0 CPU is a seven-way superscalar processor that supports the ARMv8 instruction set and implements an improved dynamic code optimization algorithm and additional low-power retention states for better energy efficiency. The two Denver cores and the Cortex A57 CPU complex are interconnected through a proprietary coherent interconnect fabric.

A new 256-core Pascal GPU in Parker delivers the performance needed to run advanced deep learning inference algorithms for self-driving capabilities. And it offers the raw graphics performance and features to power multiple high-resolution displays, such as cockpit instrument displays and in-vehicle infotainment panels.

Scalable Architecture

Working in concert with Pascal-based supercomputers in the cloud, Parker-based self-driving cars can be continually updated with newer algorithms and information to improve self-driving accuracy and safety.

Parker includes hardware-enabled virtualization that supports up to eight virtual machines. Virtualization enables carmakers to use a single Parker-based DRIVE PX 2 system to concurrently host multiple systems, such as in-vehicle infotainment systems, digital instrument clusters and driver assistance systems.

Parker is also a scalable architecture. Automakers can use a single unit for highly efficient systems. Or they can integrate it into more complex designs, such as NVIDIA DRIVE PX 2, which employs two Parker chips along with two discrete Pascal GPU cores.

In fact, DRIVE PX 2 delivers an unprecedented 24 trillion deep learning operations per second to run the most complex deep learning-based inference algorithms. Such systems deliver the supercomputer level of performance that self-driving cars need to safely navigate through all kinds of driving environments.


Parker Specifications

To address the needs of the automotive market, Parker includes features such as a dual-CAN (controller area network) interface to connect to the numerous electronic control units in the modern car, and Gigabit Ethernet to transport audio and video streams. Compliance with ISO 26262 is achieved through a number of safety features implemented in hardware, such as a safety engine that includes a dedicated dual-lockstep processor for reliable fault detection and processing.

Parker is architected to support both decode and encode of video streams up to 4K resolution at 60 frames per second. This will enable automakers to use higher resolution in-vehicle cameras for accurate object detection, and 4K display panels to enhance in-vehicle entertainment experiences.

Expect to see more details on Parker’s architecture and capabilities as we accelerate toward making the self-driving car a reality.

  1. References the native FP16 (16-bit floating-point) processing capability of Parker.
  2. Based on SpecINT2K-Rate performance measured on Parker development platform and devices based on competing mobile processors.

Advanced Real-Time Visualization for Robotic Heart Surgery

Researchers at the Harvard Biorobotics Laboratory are harnessing the power of GPUs to generate real-time volumetric renderings of patients’ hearts. The team has built a robotic system to autonomously steer commercially available cardiac catheters that can acquire ultrasound images from within the heart. They tested their system in the clinic and reported their results at the 2016 IEEE International Conference on Robotics and Automation (ICRA) in Stockholm, Sweden.

The team used an Intracardiac Echocardiography (ICE) catheter, which is equipped with an ultrasound transducer at the tip, to acquire 2D images from within a beating heart. Using NVIDIA GPUs, the team was able to reconstruct a 4D (3D + time) model of the heart from these ultrasound images.

Generating a 4D volume begins with co-registering ultrasound images that are acquired at different imaging angles but at the same phase of the cardiac cycle. The position and rotation of each image with respect to the world coordinate frame is measured using electromagnetic (EM) trackers that are attached to the catheter body. This point cloud is then discretized to lie on a 3D grid. Next, infilling is performed to fill the gaps between the slices, generating a dense volumetric representation of the heart. Finally, the volumes are displayed to the surgeon using volume rendering via raycasting, leveraging CUDA-OpenGL interoperability. The team accelerated the volume reconstruction and rendering algorithms using two NVIDIA TITAN GPUs.
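
To make the reconstruction step concrete, here is a short NumPy sketch (not the team’s code) that maps each tracked 2D ultrasound pixel through its measured pose into world coordinates and accumulates it on a regular 3D grid; the grid size, voxel size and pose format are illustrative assumptions, and the real system additionally performs infilling and CUDA-accelerated raycasting:

import numpy as np

def splat_slices_to_volume(slices, poses, grid_shape=(128, 128, 128), voxel_size=1.0):
    """Accumulate tracked 2D ultrasound slices into a 3D voxel grid.

    slices: list of 2D arrays of echo intensities.
    poses:  list of 4x4 homogeneous transforms (e.g. from the EM tracker)
            mapping slice-plane coordinates (u, v, 0, 1) to world coordinates.
    """
    volume = np.zeros(grid_shape, dtype=np.float32)
    counts = np.zeros(grid_shape, dtype=np.float32)
    for image, pose in zip(slices, poses):
        h, w = image.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pts = np.stack([u.ravel(), v.ravel(),
                        np.zeros(u.size), np.ones(u.size)])  # homogeneous slice coords
        world = (pose @ pts)[:3]                              # 3 x N world coordinates
        idx = np.round(world / voxel_size).astype(int)        # nearest voxel per pixel
        keep = np.all((idx >= 0) & (idx < np.array(grid_shape)[:, None]), axis=0)
        vi, vj, vk = idx[:, keep]
        np.add.at(volume, (vi, vj, vk), image.ravel()[keep])
        np.add.at(counts, (vi, vj, vk), 1.0)
    return np.divide(volume, counts, out=volume, where=counts > 0)  # average overlaps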

“ICE catheters are currently seldom used due to the difficulty in manual steering,” said principal investigator Prof. Robert D. Howe, Abbott and James Lawrence Professor of Engineering at Harvard University. “Our robotic system frees the clinicians of this burden, and presents them with a new method of real-time visualization that is safer and higher quality than the X-ray imaging that is used in the clinic. This is an enabling technology that can lead to new procedures that were not possible before, as well as improving the efficacy of the current ones.”

Providing real-time procedure guidance requires the use of efficient algorithms combined with a high-performance computing platform. Images are acquired at up to 60 frames per second from the ultrasound machine. Generating volumetric renderings from these images in real-time is only possible using GPUs.

NVIDIA TITAN X

The Ultimate.

The NVIDIA TITAN X, featuring the NVIDIA Pascal architecture, is the ultimate graphics card. Whatever you’re doing, this groundbreaking TITAN X gives you the power to accomplish things you never thought possible.

NVIDIA PASCAL

THE WORLD’S MOST ADVANCED GPU ARCHITECTURE

GeForce GTX 10-series graphics cards are powered by Pascal to deliver up to 3x the performance of previous-generation graphics cards, plus innovative new gaming technologies and breakthrough VR experiences.

UP TO 3X FASTER PERFORMANCE
LATEST GAMING TECHNOLOGIES
NEXT-GEN VR EXPERIENCES
*Up to 3X faster performance for GeForce GTX 10 Series when compared to the GTX 900 Series

Irresponsible Amount of Performance

We packed the most raw horsepower we possibly could into this GPU. Driven by 3584 NVIDIA CUDA® cores running at 1.5GHz, TITAN X packs 11 TFLOPs of brute force. Plus it’s armed with 12 GB of GDDR5X memory – one of the fastest memory technologies in the world.
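
That figure follows from the usual peak FP32 estimate of two floating-point operations (one fused multiply-add) per CUDA core per clock; a quick back-of-the-envelope check in Python, using the boost clock from the spec list below:

cuda_cores = 3584
boost_clock_ghz = 1.531          # boost clock from the spec list below
flops_per_core_per_clock = 2     # one fused multiply-add counts as two FLOPs

peak_tflops = cuda_cores * boost_clock_ghz * flops_per_core_per_clock / 1000
print(f"{peak_tflops:.1f} TFLOPs")  # ~11.0 TFLOPs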

NVIDIA TITAN X
GPU Architecture: Pascal
Frame Buffer: 12 GB G5X
Memory Speed: 10 Gbps
Boost Clock: 1531 MHz


Design Excellence

TITAN X is crafted to offer superior heat dissipation using vapor chamber cooling technology in a die cast aluminum body. It’s a powerful combination of brilliant efficiency, stunning design, and industry-leading performance.

NVIDIA SLI® TECHNOLOGY

GEFORCE GTX SLI HB BRIDGE

NVIDIA’s new SLI bridge doubles the available transfer bandwidth compared to the NVIDIA Maxwell architecture. Delivering silky-smooth gameplay, it’s the best way to experience surround gaming—and it’s only compatible with the NVIDIA TITAN X, GeForce GTX 1080 and GeForce GTX 1070 graphics cards.



World’s First Real-Time 3D Oil Painting Simulator

The painting and drawing tools most people use are 2D, but a new project gives artists the ability to choose any brush they like and a limitless array of paint colors, and to use the same natural twists and turns of the brush to create the rich textures of oil painting, all on a digital canvas.

Delivering such a realistic, physically based painting tool requires some heavy-duty computational power, so Adobe Research collaborated with NVIDIA to create the world’s first real-time, simulation-based 3D painting system with bristle-level interactions, built entirely with CUDA. Adobe researchers Zhili Chen and Byungmoon Kim originally developed Project Wetbrush in 2015 and have since collaborated with NVIDIA software experts to optimize the application’s performance, allowing them to add even more GPU-accelerated features to the system.

This is just the beginning for the project. Using deep learning, some of the most computationally challenging physical simulations could potentially be added to create more responsive and realistic brush dynamics, or the system could even learn from itself.

World’s First Real-Time 3D Oil Painting Simulator

Open-Access Visual Search Tool for Satellite Imagery

A new project by Carnegie Mellon University researchers provides journalists, citizen scientists, and other researchers with the ability to quickly scan large geographical regions for specific visual features.

Simply click on a feature in the satellite imagery – a baseball diamond, cul-de-sac, tennis court – and Terrapattern will find other things that look similar in the area and pinpoint them on the map.

Using a deep neural network trained for five days on an NVIDIA GeForce GPU, their model looks at small squares of the landscape and, by comparing those patterns to a huge database of tagged map features from OpenStreetMap, has learned to associate them with certain concepts.
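
The search stage can be pictured as a nearest-neighbour lookup over the network’s feature vectors; the short Python sketch below shows the idea, with random vectors standing in for the real CNN embeddings of map tiles (the feature dimension and tile count are made up for illustration):

import numpy as np

# Stand-in for CNN embeddings of map tiles: one unit-length feature vector per tile.
rng = np.random.default_rng(0)
tile_features = rng.normal(size=(10_000, 256)).astype(np.float32)
tile_features /= np.linalg.norm(tile_features, axis=1, keepdims=True)

def most_similar(query_index, k=5):
    """Return the k tiles whose features are closest to the query tile."""
    scores = tile_features @ tile_features[query_index]  # cosine similarity (unit vectors)
    ranked = np.argsort(-scores)
    return ranked[ranked != query_index][:k]

print(most_similar(42))  # tiles that "look like" tile 42 in feature space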

Open-Access Visual Search Tool for Satellite Imagery

Currently, Terrapattern is limited to Pittsburgh, San Francisco, New York City and Detroit, but access to more cities is coming soon.

Cleaning Up Radioactive Waste from World War II With Supercomputing

The Hanford site in southeastern Washington is the largest radioactive waste site in the United States and is still awaiting cleanup after more than 70 years. Cleaning up radioactive waste is extremely complicated since some elements stay radioactive for thousands of years.

Scientists from Lawrence Berkeley National Laboratory and several universities, including The State University of New York at Buffalo, the University of Alabama, the University of Minnesota, Washington State University and Rice University, are using the NVIDIA Tesla GPU-accelerated Titan supercomputer at Oak Ridge National Laboratory to study the chemistry of radioactive elements called actinides — uranium, plutonium and other metals that release huge amounts of energy when their atoms are split.

Cleaning Up Radioactive Waste from World War II With Supercomputing

The supercomputer is providing the scientists with simulations of the chemical reactions which will help them develop new methods of decontaminating the waste.

Cleaning Up Radioactive Waste from World War II With Supercomputing



About me

My name is Sayed Ahmadreza Razian and I hold a master’s degree in Artificial Intelligence.
Click here for my CV/Resume page.

Related topics such as image processing, machine vision, virtual reality, machine learning, data mining, and monitoring systems are my research interests, and I intend to pursue a PhD in one of these fields.

Click here to view my profile and resume page.

My Scientific expertise
  • Image processing
  • Machine vision
  • Machine learning
  • Pattern recognition
  • Data mining - Big Data
  • CUDA Programming
  • Game and Virtual reality

Download Nokte for free


Coming Soon....

Greatest hits

Imagination is more important than knowledge.

Albert Einstein

You are what you believe yourself to be.

Paulo Coelho

Waiting hurts. Forgetting hurts. But not knowing which decision to take can sometimes be the most painful.

Paulo Coelho

Anyone who has never made a mistake has never tried anything new.

Albert Einstein

One day you will wake up and there won’t be any more time to do the things you’ve always wanted. Do it now.

Paulo Coelho

Gravitation is not responsible for people falling in love.

Albert Einstein

It’s the possibility of having a dream come true that makes life interesting.

Paulo Coelho

The fear of death is the most unjustified of all fears, for there’s no risk of accident for someone who’s dead.

Albert Einstein

