Big-data analysis consists of searching for buried patterns that have some kind of predictive power. But choosing which “features” of the data to analyze usually requires some human intuition. In a database containing, say, the beginning and end dates of various sales promotions and weekly profits, the crucial data may not be the dates themselves but the spans between them, or not the total profits but the averages across those spans.
MIT researchers aim to take the human element out of big-data analysis, with a new system that not only searches for patterns but designs the feature set, too. To test the first prototype of their system, they enrolled it in three data science competitions, in which it competed against human teams to find predictive patterns in unfamiliar data sets. Of the 906 teams participating in the three competitions, the researchers’ “Data Science Machine” finished ahead of 615.
In two of the three competitions, the predictions made by the Data Science Machine were 94 percent and 96 percent as accurate as the winning submissions. In the third, the figure was a more modest 87 percent. But where the teams of humans typically labored over their prediction algorithms for months, the Data Science Machine took somewhere between two and 12 hours to produce each of its entries.
“We view the Data Science Machine as a natural complement to human intelligence,” says Max Kanter, whose MIT master’s thesis in computer science is the basis of the Data Science Machine. “There’s so much data out there to be analyzed. And right now it’s just sitting there not doing anything. So maybe we can come up with a solution that will at least get us started on it, at least get us moving.”
Between the lines
Kanter and his thesis advisor, Kalyan Veeramachaneni, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), describe the Data Science Machine in a paper that Kanter will present next week at the IEEE International Conference on Data Science and Advanced Analytics.
Veeramachaneni co-leads the Anyscale Learning for All group at CSAIL, which applies machine-learning techniques to practical problems in big-data analysis, such as determining the power-generation capacity of wind-farm sites or predicting which students are at risk for dropping out of online courses.
“What we observed from our experience solving a number of data science problems for industry is that one of the very critical steps is called feature engineering,” Veeramachaneni says. “The first thing you have to do is identify what variables to extract from the database or compose, and for that, you have to come up with a lot of ideas.”
In predicting dropout, for instance, two crucial indicators proved to be how long before a deadline a student begins working on a problem set and how much time the student spends on the course website relative to his or her classmates. MIT’s online-learning platform MITx doesn’t record either of those statistics, but it does collect data from which they can be inferred.
Kanter and Veeramachaneni use a couple of tricks to manufacture candidate features for data analyses. One is to exploit structural relationships inherent in database design. Databases typically store different types of data in different tables, indicating the correlations between them using numerical identifiers. The Data Science Machine tracks these correlations, using them as a cue to feature construction.
For instance, one table might list retail items and their costs; another might list items included in individual customers’ purchases. The Data Science Machine would begin by importing costs from the first table into the second. Then, taking its cue from the association of several different items in the second table with the same purchase number, it would execute a suite of operations to generate candidate features: total cost per order, average cost per order, minimum cost per order, and so on. As numerical identifiers proliferate across tables, the Data Science Machine layers operations on top of each other, finding minima of averages, averages of sums, and so on.
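That aggregation process can be sketched in a few lines (a hypothetical example using pandas; the table and column names are made up, and this is not the Data Science Machine's actual code):

```python
import pandas as pd

# Hypothetical tables: one lists retail items and their costs; the other
# lists the items included in individual customers' purchases.
items = pd.DataFrame({"item_id": [1, 2, 3], "cost": [5.0, 12.5, 3.0]})
orders = pd.DataFrame({"order_id": [100, 100, 101, 101, 101],
                       "item_id": [1, 2, 1, 2, 3]})

# Step 1: import costs from the first table into the second.
merged = orders.merge(items, on="item_id")

# Step 2: group rows sharing a purchase number and execute a suite of
# aggregation operations to generate candidate features.
features = merged.groupby("order_id")["cost"].agg(["sum", "mean", "min", "max"])
print(features)
```

Layering further operations over these columns (averages of sums, minima of averages, and so on) is what the article describes as stacking as numerical identifiers proliferate across tables.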
It also looks for so-called categorical data, which appear to be restricted to a limited range of values, such as days of the week or brand names. It then generates further feature candidates by dividing up existing features across categories.
Once it’s produced an array of candidates, it reduces their number by identifying those whose values seem to be correlated. Then it starts testing its reduced set of features on sample data, recombining them in different ways to optimize the accuracy of the predictions they yield.
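The correlation-based pruning step can be sketched as follows (a minimal sketch in pandas; the 0.95 threshold and the drop-one-of-each-pair rule are illustrative assumptions, not the system's documented criteria):

```python
import numpy as np
import pandas as pd

def prune_correlated(features: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Drop one feature from each pair whose absolute correlation exceeds threshold."""
    corr = features.corr().abs()
    # Keep only the upper triangle so each pair is considered once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return features.drop(columns=to_drop)

# Example: 'total' and 'avg' are perfectly correlated here, so one is dropped.
candidates = pd.DataFrame({
    "total":   [10.0, 20.0, 30.0, 40.0],
    "avg":     [1.0, 2.0, 3.0, 4.0],
    "minimum": [0.5, 3.0, 1.0, 2.5],
})
reduced = prune_correlated(candidates)
print(list(reduced.columns))
```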
“The Data Science Machine is one of those unbelievable projects where applying cutting-edge research to solve practical problems opens an entirely new way of looking at the problem,” says Margo Seltzer, a professor of computer science at Harvard University who was not involved in the work. “I think what they’ve done is going to become the standard quickly — very quickly.”
It’s a pretty heavy lineup, and unlike the bad-guy roster, each of these characters is a real historical figure, though their portrayals are probably not historically accurate. There’s Alexander Graham Bell, whose greatest invention is apparently not the telephone but a wide-range stun bomb and what appears to be some kind of ray gun; Karl Marx, the revolutionary socialist; the naturalist and evolutionist Charles Darwin; the novelist Charles Dickens; Florence Nightingale, who modernized the concept of nursing; and of course Queen Victoria, for whom the Victorian Era is named.
I still haven’t quite got a grip on how the Assassin Brotherhood went from being a shadowy clan of… well, assassins, to an in-the-open gang of freedom fighters. I’m not sure what good Darwin’s complaints of “Oh, they’re saying nasty things about me” do for the cause, either. In fact, when you get right down to it, the leaders of the bad guys seem like the sort of folks who get things done; the ones on your team look a lot more like they hope someone else will step up to the plate.
Assassin’s Creed: Syndicate comes out on November 19.
Happy Metal Gear Solid V: The Phantom Pain day! Or maybe you’re playing Mad Max? Either way, Nvidia has a driver for you. Sexily named the GeForce Game Ready 355.82 WHQL Mad Max and Metal Gear Solid V: The Phantom Pain driver, it can be downloaded through GeForce Experience or from the GeForce site.
Apparently, you’ll get “Game Ready optimizations, a NVIDIA Control Panel Ambient Occlusion profile, and a SLI profile”. That SLI profile will let you play at 4K and 5K resolutions, and if you want to see what that looks like, NVIDIA has released a trailer.
The latest version of Microsoft’s main operating system is full of hidden shortcuts and behaviors.
Millions of users have upgraded to Windows 10, and now the challenge is figuring out how to use it. Microsoft’s flagship operating system combines elements of both Windows 7 and 8.1 but adds a few new places and interfaces as well. To check your network connections, for example, or to see a list of installed programs, the route may be unfamiliar. So if you’re lost in Windows 10 right now, let us draw you a map.
Navigate the new Start menu and Cortana
Windows 10’s Start menu uses elements from both Windows 7 and Windows 8. The biggest change from Windows 7 is the pane of tiles on the right-hand side. If you don’t like these, just right-click them and select Unpin From Start.
You can also stop a tile’s live updates. The Twitter app, for instance, installed by default, displays a constantly updated feed; right-click its tile and select “Turn live tile off” to stop it. Apps that aren’t system apps like the calendar or the Windows Store can also be uninstalled right from here. If you want to keep using an app but don’t want it in your Start menu, click and drag it to the desktop or taskbar.
However, you can’t create a taskbar shortcut for Cortana (Microsoft’s Siri-like search assistant). Instead, begin a search and click the circle to the left of My Stuff to access Cortana, or just say “Hey Cortana” if you have a microphone hooked up. Soon there will be Windows 10 PCs with Intel processors that can use “Hey Cortana” to wake from sleep mode. If you don’t want to use Cortana, no action is needed: it’s disabled by default.
Locate programs and the Control Panel
In Windows 7, you go to Add & Remove Programs to uninstall software or to see how much space an app takes up or when you last used it. With Windows 8, Microsoft started calling this area Programs & Features, and you could search for either name to find the tool.
That’s no longer the case in Windows 10. Now you search for Apps & Features (press the Windows key and type your search query). The tool is in the System section of Windows 10’s Settings app. Right-click Apps & Features in the left-hand pane, and you get the option to create a tile with that name in Windows 10’s Start menu.
If you prefer the original Control Panel, right-click the Start menu button in the lower left-hand corner of the screen and select it from the context menu. In there you’ll find a host of tools that are no longer fully exposed to users, like Programs & Features and the Appearance and Personalization menus. Some of the icons are different, but the functions and the look are mostly intact. The Windows 10 tool for setting default apps is arguably easier to use, though (press the Windows key, select Settings, click the System icon in the upper-left, and click Default Apps in the left-hand menu). The tool sorts according to what the program does, instead of making you go through each detected program and check what it wants to do.
If you want to ditch the Control Panel for the Settings tool, Windows 10 has a new keyboard shortcut for the latter: Windows-I. Microsoft keeps an official list of all keyboard shortcuts available in Windows 10.
Windows 10 was built to be a touch-friendly operating system, but Microsoft isn’t slacking on keyboard and mouse support. Windows-Tab launches the Task View tool, which displays all your open windows at once and reveals the New Desktop option in the lower right-hand corner. Yep, Windows finally has a virtual desktop interface (VDI), but it’s fairly basic. Unlike in OS X and Linux, you can’t use VDIs to organize different sets of application shortcuts, folders, or files, and you can’t apply wallpaper or color schemes that are unique to each VDI. In Windows 10, anything you apply to your “real” desktop is mirrored across all the VDIs you have created. Still, it’s a good start.
Once you’ve created a new desktop, you can switch between it and your “real” desktop by pressing Windows-Ctrl and the left or right arrow key. All open windows share your original taskbar, which makes them easier to keep track of, but things also may get squished. Create a little more real estate down there by right-clicking the taskbar, selecting Properties, checking the box next to “Use small icons,” clicking the Apply button and then OK to close the menu.
If you have multiple displays plugged in, virtual desktops may not be as useful. But you can move an application window from one display to another by pressing Windows-Shift-Left Arrow or -Right Arrow. This shortcut has actually been around since Windows 7. Oddly, you can’t use this shortcut combo to move a window from one Windows 10 VDI to another.
Tweaking the Action Center
There’s a new default icon in your system tray (in the lower right-hand corner of the desktop). It looks like a square-shaped conversation bubble with three horizontal lines inside it. This is the shortcut to your Action Center, which works like the notifications system in Android or iOS. Within it are four main shortcuts (or Quick Actions, the vague term that Windows 10 prefers). By default, they are Tablet Mode, Connect, Note, and All Settings. The Connect function handles your Wi-Fi and Ethernet interaction, and the Note function is a scratch pad. If you are signed into a Microsoft account, you’ll also see incoming email here.
You can change the four main Quick Actions, but not from within the Action Center. Instead, right-click the date and time in the lower right-hand corner of the screen and select “Customize notification icons.” This opens up the Notifications & Actions section of the Settings tool. Click one of the four Quick Action buttons to open a drop-down menu listing other shortcuts.
OneDrive, formerly known as SkyDrive, is Microsoft’s cloud storage competitor to Google Drive and iCloud. Its cloud-shaped icon will appear by default in your system tray, because it’s set to start automatically when you load Windows. If you don’t care about OneDrive, stop this behavior by right-clicking the cloud icon, clicking Settings and the Settings tab (the window doesn’t default to this tab), unchecking the box next to “Start OneDrive automatically when I sign into Windows,” and clicking OK to confirm your changes. To close OneDrive manually, right-click the icon, select Exit, and click the Close OneDrive button to confirm.
Side note: OneDrive is not an ideal cloud storage service, because it doesn’t offer client-side encryption. Instead, the service keeps a copy of your encryption keys, so technically Microsoft can look at your files (unless you’ve pre-encrypted them with a third-party program or service) or hand those keys over to anyone with the legal power to seize them — all without your knowledge. Most cloud storage services, including iCloud and Google Drive, keep a copy of your encryption keys. If you want a service that lets you keep those keys to yourself, check out our roundup of cloud storage services.
If you want to change how other icons show up in the system tray, return to Notifications & Actions and click the link labeled “Select which icons appear on the taskbar.” You’ll see a list of icons that you can toggle on and off with a slider. This is just the first batch of icons; to see the rest of them, click the back arrow in the upper left-hand corner of the Settings window and click the link labeled “Turn system icons on or off.” There’s no Apply or OK button; your changes are saved right away, automatically.
System designed to label visual scenes according to type turns out to detect particular objects, too.
Object recognition — determining what objects are where in a digital image — is a central research topic in computer vision.
But a person looking at an image will spontaneously make a higher-level judgment about the scene as whole: It’s a kitchen, or a campsite, or a conference room. Among computer science researchers, the problem known as “scene recognition” has received relatively little attention.
Last December, at the Annual Conference on Neural Information Processing Systems, MIT researchers announced the compilation of the world’s largest database of images labeled according to scene type, with 7 million entries. Using a machine-learning technique known as “deep learning” — a revival of the classic artificial-intelligence technique of neural networks — they trained the most successful scene classifier yet, which was between 25 and 33 percent more accurate than its best predecessor.
At the International Conference on Learning Representations this weekend, the researchers will present a new paper demonstrating that, en route to learning how to recognize scenes, their system also learned how to recognize objects. The work implies that at the very least, scene-recognition and object-recognition systems could work in concert. But it also holds out the possibility that they could prove to be mutually reinforcing.
“Deep learning works very well, but it’s very hard to understand why it works — what is the internal representation that the network is building,” says Antonio Torralba, an associate professor of computer science and engineering at MIT and a senior author on the new paper. “It could be that the representations for scenes are parts of scenes that don’t make any sense, like corners or pieces of objects. But it could be that it’s objects: To know that something is a bedroom, you need to see the bed; to know that something is a conference room, you need to see a table and chairs. That’s what we found, that the network is really finding these objects.”
Torralba is joined on the new paper by first author Bolei Zhou, a graduate student in electrical engineering and computer science; Aude Oliva, a principal research scientist, and Agata Lapedriza, a visiting scientist, both at MIT’s Computer Science and Artificial Intelligence Laboratory; and Aditya Khosla, another graduate student in Torralba’s group.
Under the hood
Like all machine-learning systems, neural networks try to identify features of training data that correlate with annotations performed by human beings — transcriptions of voice recordings, for instance, or scene or object labels associated with images. But unlike the machine-learning systems that produced, say, the voice-recognition software common in today’s cellphones, neural nets make no prior assumptions about what those features will look like.
That sounds like a recipe for disaster, as the system could end up churning away on irrelevant features in a vain hunt for correlations. But instead of deriving a sense of direction from human guidance, neural networks derive it from their structure. They’re organized into layers: Banks of processing units — loosely modeled on neurons in the brain — in each layer perform random computations on the data they’re fed. But they then feed their results to the next layer, and so on, until the outputs of the final layer are measured against the data annotations. As the network receives more data, it readjusts its internal settings to try to produce more accurate predictions.
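The layered process described here can be sketched as a toy two-layer network in NumPy (illustrative only, and far smaller than the networks in the paper): randomly initialized layers feed their results forward, the final layer's outputs are measured against the annotations, and the internal settings are readjusted to reduce the gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 4 examples with 3 features each, plus binary annotations.
X = rng.normal(size=(4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Two layers of processing units with randomly initialized weights.
W1 = rng.normal(scale=0.5, size=(3, 5))
W2 = rng.normal(scale=0.5, size=(5, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(3000):
    # Forward pass: each layer feeds its results to the next.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Measure the final layer's outputs against the annotations.
    losses.append(float(np.mean((out - y) ** 2)))
    # Readjust the internal settings (one step of gradient descent).
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.2 * h.T @ grad_out
    W1 -= 0.2 * X.T @ grad_h

print(round(losses[0], 4), round(losses[-1], 4))
```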
After the MIT researchers’ network had processed millions of input images, readjusting its internal settings all the while, it was about 50 percent accurate at labeling scenes — where human beings are only 80 percent accurate, since they can disagree about high-level scene labels. But the researchers didn’t know how their network was doing what it was doing.
The units in a neural network, however, respond differentially to different inputs. If a unit is tuned to a particular visual feature, it won’t respond at all if the feature is entirely absent from a particular input. If the feature is clearly present, it will respond forcefully.
The MIT researchers identified the 60 images that produced the strongest response in each unit of their network; then, to avoid bias, they sent the collections of images to paid workers on Amazon’s Mechanical Turk crowdsourcing site, whom they asked to identify commonalities among the images.
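That selection step reduces to a sort over recorded activations (a sketch in NumPy with made-up activation values; only the figure of 60 images per unit comes from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical recorded activations: rows are images, columns are units.
n_images, n_units, top_k = 1000, 8, 60
activations = rng.random((n_images, n_units))

# For each unit, take the indices of the top_k images that produced the
# strongest responses, strongest first (negate to sort descending).
top_images = np.argsort(-activations, axis=0)[:top_k, :]

# Each column now lists the 60 image indices to send to crowd workers
# for that unit.
print(top_images.shape)
```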
“The first layer, more than half of the units are tuned to simple elements — lines, or simple colors,” Torralba says. “As you move up in the network, you start finding more and more objects. And there are other things, like regions or surfaces, that could be things like grass or clothes. So they’re still highly semantic, and you also see an increase.”
According to the assessments by the Mechanical Turk workers, about half of the units at the top of the network are tuned to particular objects. “The other half, either they detect objects but don’t do it very well, or we just don’t know what they are doing,” Torralba says. “They may be detecting pieces that we don’t know how to name. Or it may be that the network hasn’t fully converged, fully learned.”
In ongoing work, the researchers are starting from scratch and retraining their network on the same data sets, to see if it consistently converges on the same objects, or whether it can randomly evolve in different directions that still produce good predictions. They’re also exploring whether object detection and scene detection can feed back into each other, to improve the performance of both. “But we want to do that in a way that doesn’t force the network to do something that it doesn’t want to do,” Torralba says.
“Our visual world is much richer than the number of words that we have to describe it,” says Alexei Efros, an associate professor of computer science at the University of California at Berkeley. “One of the problems with object recognition and object detection — in my view, at least — is that you only recognize the things that you have words for. But there are a lot of things that are very much visual, but maybe there aren’t easy describable words for them. Here, the most exciting thing for me would be that, by training on things that we do have labels for — kitchens, bathrooms, shops, whatever — we can still get at some of these visual elements and visual concepts that we wouldn’t even be able to train for, because we can’t name them.”
“More globally,” he adds, “it suggests that even if you have some very limited labels and very limited tasks, if you train a model that is a powerful model on them, it could also be doing less limited things. This kind of emergent behavior is really neat.”
Enterprises will be able to give their most important iOS apps priority and route voice calls over their own networks through the partnership that Cisco Systems and Apple announced on Monday.
The deal reflects a recognition that mobile devices and apps are replacing traditional IT in many enterprises. About 30 percent of voice calls in business today are mobile, Cisco says. The companies want to combine mobile and traditional enterprise technologies to help people work better. But they’re not saying when that vision’s going to hit the streets.
Cisco and Apple can integrate mobile devices and apps more tightly with enterprise networks because each company supplies both hardware and software, according to Rowan Trollope, senior vice president of Cisco’s collaboration group. “We can move beyond what just a normal app developer could do,” he said.
The companies haven’t said when they’ll deliver on the partnership, but the results could be broad in scope. They’re looking at better collaboration capabilities, closer integration between iPhones and office phones, and tighter enterprise control over mobile traffic, according to Trollope.
Apple has been pushing for enterprise credibility just as established business IT companies face the onslaught of consumer mobile devices like iPhones and iPads and the free Internet-based apps that run on them. The deal it announced with IBM last year has already produced a host of iOS apps geared toward specific industries.
The latest partnership may help Apple more than it does Cisco, according to analyst Avi Greengart of Current Analysis. Having the biggest supplier of network equipment show favor toward iPhones and iPads could steer enterprises toward Apple devices, particularly a business-focused version of the iPad that Greengart believes Apple may be developing.
On the other side, Apple also brings a hip factor that’s in short supply at Cisco, which left the consumer market several years ago to focus on less glamorous technologies behind the scenes in enterprise and service-provider infrastructure. Apple’s rock-star CEO Tim Cook joined Cisco Executive Chairman John Chambers on stage at Cisco’s global sales conference in Las Vegas to announce the deal on Monday.
A key part of the companies’ plan is to bring iPhone business calls onto corporate networks, where they can be tracked and logged the way calls from desk phones are now for purposes like security and regulatory compliance. This kind of integration hasn’t been possible before, Trollope said. Users can better count on good connections over a private network than on a typical cellular network, too, he said, though the companies also plan to bring benefits to carrier networks.
There are at least a couple of ways Cisco says the partners can boost mobile performance for iOS devices in the workplace. For one thing, they will be able to prioritize data traffic by application. For example, on a hospital network, a doctor’s videoconference with a patient on an iPad would get priority over a cat video being sent by a patient in the next room, so the videoconference would stream normally.
There will also be ways to detect and streamline demanding data flows on the network, like big software updates or content that every student in a classroom has to download. Those could involve caching the content in storage that’s built into the network near the users requesting it, Trollope said. Keeping data nearby cuts down on the number of packets going through routers and switches deeper in the network.
The partnership may also make the infrastructure already in offices, like desk phones and speaker phones, more useful through Apple devices. For example, users may someday be able to make a call on a speaker phone just by tapping on a contact’s number on an iPhone rather than entering the number all over again on the speaker phone.
Cisco also plans to develop experiences in its collaboration tools, such as Spark, Telepresence and WebEx, that are optimized for iOS.
Windows 10 is an entirely new version of the veteran Windows operating system – a version that is make-or-break for Microsoft.
Even though Windows 8.1 did improve things, there’s no escaping that with Windows 8, Microsoft was hugely complacent, buoyed by the success of Windows 7. It drastically misunderstood its users with a fundamentally changed user interface which didn’t make any logical sense and was hard to learn. It failed us. It failed itself.
Thankfully 2015 Microsoft is pretty different to 2012 Microsoft. The key management of the corporation has changed. It has woken up to the fact that people can choose other operating systems. It’s keen on making stuff for OS X, Linux, iOS and Android. As you’ll hear, it’s allowing apps from other platforms to be easily ported to Windows, too.
Microsoft believes the future of Windows is as a platform for all. Like Android, the strength of Windows is in the thousands of companies that develop for it (see the section about Universal apps for more on the relationship with developers) and use it in their products.
That’s why Windows 10 is no longer just an operating system for 32- and 64-bit PCs. It will also run on the ARM platform for smaller tablets and smartphones. Windows 10 is going to run on phones – it’s the new version of Windows Phone, though it’s not clear whether Microsoft will brand new Windows Phones as ‘Windows 10’ or not. If you know what Windows RT was, don’t worry, because it’s nothing like that.
Universal apps will run not only on PCs, but on Windows 10 phones, Windows 10 for IoT devices and Xbox as well.
Like Windows XP, Vista, 7 and 8 before it, Windows 10 is part of the Windows NT family.
From the Windows 10 Preview to RTM
We’ve been part of the Windows Insider program, which has given people early access to Windows 10 through various phases of its development. The latest version, which this article is based on, is known as build 10240, made available on 15 July. It is the RTM – Release to Manufacturing – version, and it is what will ship on new Windows 10 PCs.
RTM doesn’t have the usual ‘Windows 10 Insider Preview’ text on the desktop, and it has also been released to everybody in the Windows Insider program – even those who didn’t want the latest updates (the ‘slow’ ring as opposed to the ‘fast’ ring).
Even after Windows 10’s release, the Windows Insider program will continue, and Microsoft will release Windows 10 updates to members of the program first.
While it’s natural that Windows 10 will be considered as ‘finished’ by reviewers (us) and consumers in the coming weeks, Microsoft doesn’t subscribe to this point of view, and says it will carry on developing the OS with additional tweaks.