AMD Ryzen Downcore Control
AMD Ryzen 7 processors come with a nice feature: downcore control, which lets you enable or disable cores. Ryzen 7 and Ryzen 5 chips use the same die, made up of two CCXs (CPU Complexes), each with 4 cores. Disabling cores on a Ryzen 7 therefore makes it possible to emulate a Ryzen 5 CPU.

AMD Ryzen CCX
The downcore control is an option available in the BIOS of X370 motherboards (perhaps on other chipsets such as the B350 too, I don’t know).
Here is the downcore control in the MSI X370 Gaming Pro Carbon BIOS:

MSI X370 Gaming Pro Carbon – Downcore control in the BIOS
A Ryzen 7 CPU has two CCXs with all cores enabled (8 cores, or 8C/16T). That is the EIGHT (4 + 4) or Auto configuration:
CCX0: 1 | 2 | 3 | 4
CCX1: 1 | 2 | 3 | 4

AMD Ryzen 7 Downcore control – Auto
A Ryzen 5 1600 or 1600X has two CCXs with 6 cores (6C/12T) enabled:
SIX (3 + 3)
CCX0: 1 | 2 | 3 | –
CCX1: 1 | 2 | 3 | –
(– marks a disabled core)

AMD Ryzen 7 Downcore control – SIX (3 + 3)
A Ryzen 5 1400 or 1500X has two CCXs with 4 cores (4C/8T) enabled. This can be emulated on a Ryzen 7 with two configurations: FOUR (2 + 2) or FOUR (4 + 0).
FOUR (2 + 2)
CCX0: 1 | 2 | – | –
CCX1: 1 | 2 | – | –
FOUR (4 + 0)
This configuration is interesting because it uses only one CCX and avoids inter-CCX communication issues (mainly the slowness: two cores on different CCXs communicate at around 30 GB/s over the Infinity Fabric, whose speed depends on the memory controller clock, while two cores on the same CCX communicate at around 175 GB/s).
CCX0: 1 | 2 | 3 | 4
CCX1: – | – | – | –

AMD Ryzen 7 Downcore control – FOUR (2 + 2)
The downcore control also makes it possible to emulate 3C/6T and 2C/4T CPUs (Ryzen 3?) with the following configurations:
THREE (3 + 0)
CCX0: 1 | 2 | 3 | –
CCX1: – | – | – | –

AMD Ryzen 7 Downcore control – THREE (3 + 0)
TWO (1 + 1)
CCX0: 1 | – | – | –
CCX1: 1 | – | – | –
TWO (2 + 0)
CCX0: 1 | 2 | – | –
CCX1: – | – | – | –

AMD Ryzen 7 Downcore control – TWO (1 + 1)
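
If you want to check how the cores that survive a given downcore setting map onto the two CCXs, the Linux kernel exposes the topology through sysfs: on Zen, each CCX has its own L3 cache, so grouping logical CPUs by shared L3 reveals the CCX layout. The following Python sketch (Linux only, standard sysfs paths, nothing specific to any particular motherboard) prints those groups:

#!/usr/bin/env python3
"""Group logical CPUs by shared L3 cache (one L3 per CCX on Zen).

A minimal sketch for Linux: it reads the standard sysfs topology files.
Run it after changing the downcore setting to see which cores survived.
"""
import glob
import os

def ccx_groups():
    groups = {}
    for cpu_dir in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
        cpu = int(os.path.basename(cpu_dir)[3:])
        l3 = os.path.join(cpu_dir, "cache/index3/shared_cpu_list")
        if not os.path.exists(l3):
            continue  # offline CPU or no L3 information exposed
        with open(l3) as f:
            key = f.read().strip()      # e.g. "0-3,8-11" for one CCX
        groups.setdefault(key, []).append(cpu)
    return groups

if __name__ == "__main__":
    for i, (shared, cpus) in enumerate(sorted(ccx_groups().items())):
        print(f"CCX {i}: logical CPUs {sorted(cpus)} (L3 shared by {shared})")

Running it in a FOUR (4 + 0) configuration should show a single group of four cores (eight logical CPUs with SMT enabled), while FOUR (2 + 2) should show two groups of two.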
Multi-GPU DirectX 12 shootouts show AMD with performance lead over Nvidia
One of the most exciting parts of Microsoft’s DirectX 12 API is the ability to pair graphics cards of varying generations, performance, or even manufacturers together in a single PC, to pool their resources and thus make games and applications run better. Unfortunately, testing “Explicit Multi-Adapter” (EMA) support under real-world conditions (i.e. not synthetic benchmarks) has so far proven difficult. There’s only been one game designed to take advantage of DX12’s numerous low-level improvements—including asynchronous compute, which allows GPUs to execute multiple command queues simultaneously—and the early builds of that game didn’t feature support for multiple GPUs.
As you might have guessed from the headline of this story, it does now. The latest beta version of Stardock’s real-time strategy game Ashes of the Singularity includes full support for EMA, meaning that for the first time we can observe what performance boost (if any) we get by doing the previously unthinkable and sticking an AMD and Nvidia card into the same PC. That’s not to mention seeing how EMA stacks up against SLI or Crossfire—which have to be turned off in order to use DX12’s multi-GPU features—and whether AMD can repeat the ridiculous performance gains seen in the older Ashes benchmark.
Benchmarks conducted by a variety of sites, including Anandtech, Techspot, PC World, and Maximum PC, all point to the same thing: EMA works, scaling can reach as high as 70 percent when adding a second GPU, and yes, AMD and Nvidia cards play nicely together.
That EMA works at all is something of an achievement for developer Stardock. Not only is it the first developer to implement the technology in an actual game, but doing so is hard going. Unlike older APIs such as DX11 and OpenGL, or multi-GPU support under the proprietary systems developed by Nvidia (SLI) and AMD (Crossfire), EMA under DX12 demands a tenacious developer: work that was previously handled by the driver has to be done manually. That’s a double-edged sword: if the developer knows what they’re doing, DX12 can provide a big performance uplift; if they don’t, performance can actually decrease.
That said, developers do have a few options for implementing multiple GPUs under DX12. Implicit Multi-Adapter (IMA) is the easiest, and is essentially a DX12 version of Crossfire or SLI, with the driver doing most of the work to distribute tasks between GPUs (a feature not part of the Ashes benchmark). Then there’s EMA, which has two modes: linked and unlinked. Linked mode requires the GPUs to be roughly the same hardware, while unlinked mode—which is what Ashes uses—allows any mix of GPUs. The whole point of this, and why it works at all under DX12, is to make use of Split Frame Rendering (SFR), which breaks each frame of a game into several tiles that are then rendered in parallel by the GPUs. This is different from the Alternate Frame Rendering (AFR) used in DX11, where each GPU renders an entire frame, duplicating data across every GPU.
In theory, with EMA and SFR, performance should go way up. Plus, users should benefit from pooling graphics memory (i.e. using two 4GB GPUs would actually result in 8GB of usable graphics memory). The one bad thing about the Ashes benchmark? It currently only supports AFR.
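
To make the AFR/SFR distinction above concrete, here is a toy Python model of how work could be assigned in each mode. It is purely illustrative—not D3D12 code—and the GPU names and tile count are invented: AFR hands whole frames to GPUs in alternation, while SFR splits every frame into tiles that different GPUs render in parallel.

"""Toy illustration of Alternate vs Split Frame Rendering work assignment.

Purely conceptual: 'work items' are just labels, not real GPU commands.
"""
from itertools import cycle

GPUS = ["GPU0 (AMD)", "GPU1 (Nvidia)"]   # hypothetical mixed pair

def afr_schedule(num_frames):
    """AFR: each GPU renders an entire frame, alternating frame by frame."""
    gpu_cycle = cycle(GPUS)
    return {f"frame {f}": next(gpu_cycle) for f in range(num_frames)}

def sfr_schedule(num_frames, tiles_per_frame=4):
    """SFR: every frame is split into tiles rendered in parallel across GPUs."""
    schedule = {}
    for f in range(num_frames):
        gpu_cycle = cycle(GPUS)
        schedule[f"frame {f}"] = {
            f"tile {t}": next(gpu_cycle) for t in range(tiles_per_frame)
        }
    return schedule

if __name__ == "__main__":
    print("AFR:", afr_schedule(4))
    print("SFR:", sfr_schedule(2))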
Hitman could become the first big DirectX 12 game
IO Interactive and AMD team up for a big performance boost on Radeon graphics cards.
DirectX 12 may soon appear in a big-budget game with next month’s launch of Hitman.
AMD says it’s collaborating with Hitman developer IO Interactive to enable the next-generation graphics tech. It sounds like this will be the first game to take advantage of DirectX 12’s Asynchronous Shaders feature, which spreads different tasks (such as lighting, physics, and memory) across the GPU’s individual computing units, letting them all work at the same time. This should allow for big gains in image quality without a performance hit.
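
As a loose CPU-side analogy only (not GPU or DirectX 12 code), the sketch below shows why letting independent tasks run at the same time helps: run serially, they take roughly the sum of their durations; run concurrently, roughly the duration of the longest one. The task names and timings are invented for illustration.

"""CPU-side analogy for asynchronous task execution (not actual GPU code)."""
import time
from concurrent.futures import ThreadPoolExecutor

# Invented, purely illustrative workloads (seconds of simulated work).
TASKS = {"lighting": 0.3, "physics": 0.2, "memory copies": 0.1}

def run(name, seconds):
    time.sleep(seconds)          # stand-in for real work
    return name

def serial():
    start = time.perf_counter()
    for name, s in TASKS.items():
        run(name, s)
    return time.perf_counter() - start

def concurrent():
    start = time.perf_counter()
    with ThreadPoolExecutor() as pool:
        list(pool.map(lambda kv: run(*kv), TASKS.items()))
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"serial:     {serial():.2f}s")     # roughly the sum of durations
    print(f"concurrent: {concurrent():.2f}s") # roughly the longest duration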
Indeed, Hitman might be the first DirectX 12 game on the market from a major publisher. The stealth action thriller is set to launch on March 11, long before other confirmed DirectX 12 titles such as Deus Ex: Mankind Divided and Fable Legends. It’s possible that Gears of War: Ultimate Edition could sneak in sooner with an early 2016 launch, but so far Microsoft hasn’t given a specific release date.
Aside from those new releases, DirectX 12 support is also in the works for some existing games, such as Just Cause 3 and The Elder Scrolls Online. Smaller games such as Descent: Underground added experimental DirectX 12 support last year.
To take advantage of DirectX 12, players will need to be running Windows 10—Microsoft has no plans to bring the tech to older versions—and AMD cards will need to be based on the company’s Graphics Core Next architecture, which covers nearly every card released since 2012.
Fable Legends: AMD and Nvidia go head-to-head in latest DirectX 12 benchmark
As DirectX 12 and Windows 10 roll out across the PC ecosystem, the number of titles that support Microsoft’s new API is steadily growing. Last month, we previewed Ashes of the Singularity and its DirectX 12 performance; today we’re examining Microsoft’s Fable Legends. This upcoming title is expected to debut on both Windows PCs and the Xbox One and is built with Unreal Engine 4.
Like Ashes, Fable Legends is still very much a work-in-progress. Unlike Ashes of the Singularity, which can currently be bought and played, Microsoft chose to distribute a standalone benchmark for its first DirectX 12 title. The test has little in the way of configurable options and performs a series of flybys through complex environments. Each flyby highlights a different aspect of the game, including its day/night cycle, foliage and building rendering, and one impressively ugly troll. If Ashes of the Singularity gave us a peek at how DX12 would handle several dozen units and intense particle effects, Fable Legends looks more like a conventional first-person RPG or FPS.
There are other facets to Fable Legends that make this a particularly interesting match-up, even if it’s still very early in the DX12 development cycle. Unlike Ashes of the Singularity, which is distributed through Oxide, this is a test distributed directly by Microsoft. It uses the Unreal 4 engine — and Nvidia and Epic, Unreal’s developer, have a long history of close collaboration. Last year, Nvidia announced GameWorks support for UE4, and the UE3 engine was an early supporter of PhysX on both Ageia PPUs and later, Nvidia GeForce cards.
Test setup
We tested the GTX 980 Ti and Radeon Fury X in Windows 10, using the latest version of the operating system. Our testbed was an Asus X99-Deluxe motherboard with a Core i7-5960X and 16GB of DDR4-2667 memory. We used an AMD-provided beta driver for the Fury X and Nvidia’s latest WHQL-approved driver, 355.98, for the GTX 980 Ti. Nvidia hasn’t released a beta Windows 10 driver since last April, and the company didn’t contact us to offer a specific driver for the Fable Legends debut.
The benchmark itself was provided by Microsoft and can run in a limited number of modes. Microsoft provided three presets — a 720p “Low” setting, a 1080p “Ultra” and a 4K “Ultra” benchmark. There are no user-configurable options besides enabling or disabling V-Sync (we tested with V-Sync disabled) and the ability to specify low settings or ultra settings. There is no DX11 version of the benchmark. We ran all three variants on both the Fury X and GTX 980 Ti.
Test Results (Original and Amended):
Once other sites began posting their own test results, it became obvious that our own 980 Ti and Fury X benchmarks were both running more slowly than they should have. It’s normal to see some variation between review sites, but gaps of 15-20% in a benchmark with no configurable options? That pointed to a different problem. Initial retests confirmed the figures shown below, even after wiping and reinstalling drivers.
The next thing to check was power management — and this is where we found our smoking gun. We tested Windows 10 in its “Balanced” power configuration, which is our standard method of testing all hardware. While we sometimes increase to “High Performance” in corner cases or to measure its impact on power consumption, Windows can generally be counted on to handle power settings, and there’s normally no performance penalty for using this mode.
Imagine our surprise, then, to see the following when we fired up the Fable benchmark:
The benchmark is actively running in the screenshot above, with power conservation mode and clock speed visible at the same time. And while CPU clock speed isn’t the determining factor in most titles, clocking down to 1.17GHz is guaranteed to have an impact on overall frame rates. Switching to “High Performance” pegged the CPU clock between 3.2 and 3.3GHz — exactly where we’d expect it to be. It’s not clear what caused this problem — it’s either a BIOS issue with the Asus X99-Deluxe or an odd driver bug in Windows 10, but we’ve retested both GPUs in High Performance mode.
These new results are significantly different from our previous tests. 4K performance is unchanged, and the two GPUs still tie, but 1080p performance improves by roughly 8% on the GTX 980 Ti and 6% on the Fury X. Aftermarket GTX 980 Ti results show higher-clocked manufacturer variants of that card outperforming the R9 Fury X, and those are perfectly valid data points — if you want to pay the relatively modest price premium for a high-end card with more clock headroom, you can expect a commensurate payoff in this test. Meanwhile, the R9 Fury X no longer wins 720p as it did before. Both cards are faster here, but the GTX gained much more from the clock speed fix, leaping up 27%, compared to just 2% for AMD. While this conforms to our general test trends in DX11, in which AMD performs more capably at higher resolutions, it’s still unusual to see only one GPU affected so strongly by such ludicrously low CPU clock speeds.
These new runs, like the initial ones, were performed multiple times. We ran the benchmark four times on each card at each quality preset, but threw out the first run in each case. We also threw out runs that appeared unusually far from the average.
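
For reference, this is the general shape of that averaging procedure as a small Python sketch; the 20% outlier threshold and the sample FPS values are arbitrary illustrations, not the exact criteria used for these tests.

"""Average benchmark runs: drop the warm-up run and obvious outliers.

The 20% outlier threshold and the sample FPS numbers are arbitrary
illustrations, not the values used in the article.
"""
from statistics import mean

def average_runs(fps_runs, outlier_frac=0.20):
    runs = fps_runs[1:]                      # discard the first (warm-up) run
    baseline = mean(runs)
    kept = [r for r in runs if abs(r - baseline) / baseline <= outlier_frac]
    return mean(kept)

if __name__ == "__main__":
    sample = [52.1, 63.4, 64.0, 45.0, 63.7]  # slow first run, one outlier
    print(f"averaged FPS: {average_runs(sample):.1f}")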
Why include AMD results?
In our initial coverage, we included a set of AMD-provided test results. This was done mostly for practical reasons — I don’t actually have an R9 390X, 390, or R9 380, and therefore couldn’t compare performance in the midrange graphics stack. Our decision to include this information “shocked” Nvidia’s PR team, which pointed out that no other reviewer had found the R9 390 beating the GTX 980.
Implications of impropriety deserve to be taken seriously, as do charges that test results have misrepresented performance. So what’s the situation here? While we may have shown you chart data before, AMD’s reviewer guide contains the raw data values themselves. According to AMD, the GTX 980 scored 65.36 FPS in the 1080p Ultra benchmark using Nvidia’s 355.98 driver (the same driver we tested with). Our own results actually point to the GTX 980 being slightly slower — when we put the card through its paces for this section of our coverage, it landed at 63.51 FPS. Still, that’s just a 3% difference.
It’s absolutely true that Tech Report’s excellent coverage shows the GTX 980 beating the R9 390 (TR was the only website to test an R9 390 in the first place). But that doesn’t mean AMD’s data is non-representative. Tech Report notes that it used a Gigabyte GTX 980 with a base clock of 1228MHz and a boost clock of 1329MHz. That’s 9% faster than the clocks on my own reference GTX 980 (1127MHz and 1216MHz respectively).
Multiply our 63.51 FPS by 1.09x and you end up with 69 FPS — exactly what Tech Report reported for the GTX 980. And if you have an Nvidia GTX 980 clocked at this speed, yes, you will outperform a stock-clocked R9 390. That, however, doesn’t mean that AMD lied in its test results. A quick trip to Newegg reveals that GTX 980s ship at a variety of clocks, from a low of 1126MHz to a high of 1304MHz. That, in turn, means that the highest-end GTX 980 is as much as 15% faster than the stock model. Buyers who shop on price are much more likely to end up with cards at the base frequency; the cheapest EVGA GTX 980 is $459, compared to $484 for the 1266MHz version.
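
That clock-scaling arithmetic is easy to sanity-check; the short calculation below simply reproduces the numbers quoted above (Tech Report’s Gigabyte clocks versus our reference card’s).

"""Sanity check of the clock-scaling arithmetic quoted above."""
ref_base, ref_boost = 1127, 1216         # reference GTX 980 clocks (MHz)
gb_base, gb_boost = 1228, 1329           # Gigabyte GTX 980 clocks (MHz)
our_fps = 63.51                          # our 1080p Ultra result

clock_ratio = gb_base / ref_base         # ~1.09, i.e. ~9% higher base clock
print(f"clock ratio: {clock_ratio:.3f}")             # 1.090
print(f"scaled FPS:  {our_fps * clock_ratio:.1f}")   # ~69.2, matching Tech Report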
There’s no evidence that AMD lied or misconstrued the GTX 980’s performance. Neither did Tech Report. Frankly, we prefer testing retail hardware when such equipment is available, but since GPU vendors tend to charge a premium for higher-clocked GPUs, it’s difficult to select any single card and declare it representative.
Amended Conclusion:
Nvidia’s overall performance in Fable Legends remains excellent, though whether Team Red or Team Green wins is going to depend on which specific card you’ve chosen to purchase. The additional headroom left in many of Nvidia’s current designs is a feature, not a bug, and while it makes it more difficult to point at any single card and declare it representative of GTX 980 Ti or 980 performance, we suspect most enthusiasts appreciate the extra headroom.
The power issues that forced a near-total rewrite of this story, however, also point to the immaturity of the DirectX 12 ecosystem. Whether you favor AMD or Nvidia, it’s early days for both benchmarks and GPUs, and we wouldn’t recommend making drastic purchasing decisions around expected future DirectX 12 capability. There are still unanswered questions and unclear situations surrounding certain DirectX 12 features, like asynchronous compute on Nvidia cards, but the overall performance story for Team Red vs. Team Green is positive. The fact that a stock R9 390, at $329, outperforms a stock GTX 980 with an MSRP of $460 is a very nice feather in AMD’s cap.