Deep Learning to Unlock Mysteries of Parkinson’s Disease

Researchers at The Australian National University are using deep learning and NVIDIA technologies to better understand the progression of Parkinson’s disease.

Currently it is difficult to determine what type of Parkinson’s someone has or how quickly the condition will progress.
The study will be conducted over the next five years at the Canberra Hospital in Australia and will involve 120 people suffering from the disease and an equal number of non-sufferers as a control group.

“There are different types of Parkinson’s that can look similar at the point of onset, but they progress very differently,” says Dr Deborah Apthorp of the ANU Research School of Psychology. “We are hoping the information we collect will differentiate between these different conditions.”

Researchers Alex Smith (L) and Dr Deborah Apthorp (R) work with Parkinson’s disease sufferer Ken Hood (middle).

Dr Apthorp said the research will combine brain imaging, eye tracking, visual perception and postural sway measurements.

Using the data collected during the study, the researchers will train their deep learning models on a GeForce GTX 1070 GPU with cuDNN, looking for patterns of degrading motor function that correlate with Parkinson’s.
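
The article does not describe the model itself, so the following is only a minimal sketch, assuming a cuDNN-backed framework such as PyTorch and a hypothetical postural-sway dataset: a small 1-D convolutional classifier separating recordings from patients and controls. The shapes, labels and data below are placeholders, not details from the study.

```python
# Hypothetical sketch: training a small classifier on postural-sway
# time series with PyTorch (which uses cuDNN on NVIDIA GPUs).
# Shapes, labels and data are illustrative, not from the study.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Fake batch: 32 recordings, 2 channels (x/y sway), 500 time steps.
x = torch.randn(32, 2, 500, device=device)
y = torch.randint(0, 2, (32,), device=device)  # 0 = control, 1 = Parkinson's

model = nn.Sequential(
    nn.Conv1d(2, 16, kernel_size=7, padding=3),  # cuDNN-accelerated convolution
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 2),
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```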

The researchers plan to incorporate virtual reality into their work by having sufferers wear head-mounted displays (HMDs) and presenting stimuli that mimic the visual scene during self-motion, which will help them better understand how self-motion perception is altered in Parkinson’s disease.

“Additionally, we would like to explore the use of eye tracking built into HMDs, which is a much lower-cost alternative to a full research eye-tracking system and consolidates the equipment into a single, highly portable and versatile device,” says researcher Alex Smith.

New Research: Video Games Help Children’s Psychological and Academic Development

New research suggests that video games have a positive effect on children’s development.

Thirteen researchers from Columbia University and Paris Descartes University found that elementary-age children who played video games for five or more hours per week – about 20 percent of the children surveyed – had fewer psychological problems and higher overall academic performance than their peers who did not play video games. In fact, the game players were described by teachers as better students, both academically and in social adjustment.

Part of the School Children Mental Health Europe project, the report analyzed the video game usage, academic performance, and behavior of nearly 3,200 European children between the ages of six and 11. Observations and data collected by parents and teachers were also considered to help guide the researchers.

“I think what we’re seeing here is the evolution of gaming in modern society. Video games are now a part of a normal childhood,” Katherine Keyes, one of the 13 authors of the study, told U.S. News. “What we’re seeing here is that kids who play a lot of video games are socially integrated, they’re prosocial, they have good school functioning and we don’t see an association with adverse mental health outcomes.”

As the research concludes, video games provide educational, social, and psychological benefits for children.

You can read the entire study online in the journal Social Psychiatry and Psychiatric Epidemiology.

An Introduction to Virtual Reality

What is Virtual Reality? Virtual Reality is a set of computer technologies which, when combined, provide an interface to a computer-generated world, and in particular provide such a convincing interface that the user believes he is actually in a three-dimensional computer-generated world. This computer-generated world may be a model of a real-world object, such as a house; it might be an abstract world that does not exist in a real sense but is understood by humans, such as a chemical molecule or a representation of a set of data; or it might be a completely imaginary science-fiction world.

A key feature is that the user believes he is actually in this different world. A second key feature of Virtual Reality is that if the user moves his head, arms or legs, the shift in visual cues must be what he would expect in the real world. In other words, besides immersion, there must be navigation and interaction.


1. Computer-mediated sensing

Different kinds of VE technology support different modes of interaction.

  • One kind of VE technology employs subjective immersion, in which the user interacts as if using an ordinary desktop computer system. The user views the system from the usual close but remote position and interacts through standard or special-purpose input or control devices such as keyboards, mouse controls, trackballs, joysticks, or force balls. Three dimensions are represented on 3D displays through the use of simulation software employing perspective, object rotation, object interposition, relative size, shading, etc.
  • The other kind of VE technology uses spatial immersion. The user is required to get inside the virtual space by wearing special equipment, typically at least a helmet mounted display that bears sensors to determine precise helmet position within the VE system’s range, in order to interact with the simulated environment. The user is thus immersed in a quasi-3D virtual space in which objects of interest appear to exist and events occur above, below, and around in all directions toward which the user turns his or her head.

Here follows a description of the typical hardware needed to run a virtual reality system. It will later be discussed whether it is advisable to retain all of these components when trying to implement a VE on a PC. What is important here is to focus on a standard architecture, as it is usually described in the literature.

Virtual Reality is often used as a comprehensive term to describe the use of 3-D graphics displays to explore a computer-generated world. This interaction between man and machine can take place in different styles that represent the actual possibilities and potential of the technology. The different styles of interaction depend upon the way the virtual environment is represented. We can identify at least six interaction styles that refer to the way the simulated/virtual environment is represented: desktop, projected, immersive, Cave, telepresence, augmented.

1) Desktop VR

This is the most popular type, based on the concept that the user interacts with the computer screen without being fully immersed in and surrounded by the computer-generated environment. The feeling of subjective immersion can be improved through stereoscopic vision (e.g., CrystalEyes), and interaction with the interface can be provided via pointing devices (mouse, joystick) or typical VR peripherals such as the DataGlove. Desktop VR is used mainly in games, but professional applications are now widespread. Examples of professional application domains include general industrial design, engineering, architecture and the visualisation of data streams. The main benefits of desktop VR are its limited cost and its less demanding interaction technology; depending on the scenario of use, a less “invasive” device such as a CRT monitor may be more appropriate than a wired HMD. Desktop VR seems particularly successful for the inspection of sample objects, whereas immersive VR is best exploited for the exploration of spaces. Modern CAD/CAM systems have slowly shifted towards the quality of VR interaction by allowing the user to manipulate 3-D objects as if they were real.

2) Projected VR

This is a technological solution often seen in VR art shows and VR leisure applications. It is based on overlaying the image of the real user on the computer-generated world; that is, the user can see his own image overlaid on the simulated environment. A motion-tracking device captures the user’s movements and inserts them into the virtual world, where they can trigger actions and reactions.

3) Immersive VR

With this type of solution the user appears to be fully inserted into the computer-generated environment. The illusion is created by an HMD providing 3-D viewing, together with a head-tracking system that guarantees the exact correspondence and co-ordination of the user’s movements with the feedback from the environment.

4) CAVE

CAVE is a small room in which a computer-generated world is projected onto the walls, both the front and the side walls. This solution is particularly suitable for collective VR experiences because it allows different people to share the same experience at the same time. It is particularly appropriate for cockpit simulations, as it allows views from different sides of an imaginary vehicle.

5) Telepresence

Users can influence and operate in a world that is real but in a different location. They observe the current situation through remote cameras and act via robotic and electronic arms. Telepresence is used for remote surgical operations and for the exploration and manipulation of hazardous environments (e.g., space, underwater, radioactive sites).

Virtual Reality is the product of a trick: the VR system tricks the user into believing that the Virtual Environment by which he feels surrounded is the actual, real environment. This is made possible by several different devices, each with its own technology, each producing a specific aspect of the VE relevant to a specific sense. We will discuss hardware relevant to the three senses to be immersed in the VE: sight, touch and hearing.

6) Augmented

This VR solution intervenes directly in the view of reality: the user’s view of the world is supplemented with virtual objects and items whose purpose is to enrich the information content of the real environment. In military applications, for instance, vision performance is enhanced by providing pictograms that anticipate the presence of other entities out of sight.


2. VR market analysis

In the Information Technology landscape, Virtual Reality has been identified as one of the most promising development areas. As with all innovative applications, this new technology is not free from problems and concerns regarding its implementation in operational working domains. Yet we are witnessing a constant improvement, from a marketing perspective, in both the quality of applied VR systems and the receptiveness of potential customers. This is due mainly to three reasons: (1) the decrease in the cost of VR systems and devices, (2) the constant improvement in the performance and reliability of the technology, and (3) the valuable economic benefits derived from VR use in its various forms and purposes (training, simulation, design). We can therefore affirm the consolidation of a class of technology that can properly be labelled “virtual reality” and appraised like any other novel high-tech industry. This technology has been confidently adopted in a number of markets, and has the potential to penetrate many more.

The VR market is at present immature, without any clear market leaders or clear segmentation of activities. In a recent paper prepared for the European Commission’s IT Policy Analysis Unit (DG III/A.5) on VR, PVN (Belgium) estimates a market of $570 million (MECU 483) by 1998. This figure includes both hardware and software. The bad news for Europe is that it is forecast to have only $115 million (MECU 97) of that market, a poor third behind the USA and Japan.
A study into telematics applications of virtual environments, carried out by Sema Group (F), Fraunhofer IAO (D) and MIT’s Research Laboratory for Electronics (USA) for the Commission’s DG XIII/C in 1994, predicted a market of “roughly MECU 400 – MECU 500 by 1998”, with a growth rate described as “very high, approaching 70-80% per year”. What is perhaps less disputed is that the major market activity is in entertainment equipment.

Frost & Sullivan’s 1994 VR market report stated that about 250 companies existed in the USA, and only 25 in other countries, which claimed to make even part of their revenue from VR. Of these, no single firm earned more than $10 million (MECU 8.4) from VR alone. A recent Financial Times report listed four types of commercial VR company – software companies, component manufacturers, system companies and ‘other industry participants’. As might be expected, the vast majority of such companies are US-based. Only two European companies, Superscape and Division, both of the UK, are listed under software companies, and only one European company, Virtuality, is listed under component manufacturers.

Although this listing was not ranked and was definitely not exhaustive, most activity does seem to be taking place in the USA. The wider availability of venture capital and the tendency of small firms to ‘spin off’ from others may account in part for this.

According to the recent (Jan. 96) Business Communications Company, Inc. report “RGB-175/The Virtual Reality Business”, by 1996 more than 300 companies will record sales of about $255 million worth of VR products and services, and behind these figures lie, as VR customers, many multinational brands in the military and medical sectors. By 2000, the VR industry will be posting annual sales of over $1 billion, with an average annual growth rate (AAGR) of 33%.

In July 1996 Ovum, the UK market research company, published another survey on Virtual Reality (VR) markets: ‘Virtual Reality: Business Applications, Markets and Opportunities’. Ovum expects the ‘killer application’ of VR to be 3D interfaces to the Internet, used for promoting products and services on the World Wide Web (WWW). It predicts that in the next five years VR will be widely used as a GUI (graphical user interface) for standard business software, replacing icon-based GUIs for such applications as databases, business systems and network management software. According to the survey, a large proportion of the companies polled indicated that they would use PC-based VR training applications for their employees.

Regarding the present uptake of VR in business, the report concludes that “companies are finding virtual reality an important source of competitive advantage” and that “although some companies are taking their time to evaluate VR, which is slowing down the speed of market lift-off, many are reporting significant benefits and are increasing their use of VR technology.” It explains this expected increase in uptake by saying that “In many cases, companies have made cost savings of over US$1 million. They have experienced faster time to market, fewer mistakes than when using CAD technologies, greater efficiency in working methods and improved quality in final products.”

The report predicts that the VR market will grow from US$134.9 million in 1995 to just over US$1 billion by the year 2001 and that the largest growth sector will be in the software sector with a 58 per cent annual growth in this period.

Another significant finding of the report is that the business market for VR in 1995 represented 65 per cent of the total, with entertainment applications accounting for only 35 per cent, even though VR is normally seen as being of major significance to the games market; it is not known whether, and how, the authors distinguish between entertainment and “the entertainment business”.

The Ovum survey foresees a radical shift in how companies will be using VR between now and the year 2001. Today the majority of VR applications are in design automation: virtual prototyping, interior design and ergonomics, and architectural and engineering design. Expensive, workstation-based systems currently dominate, accounting for 43 per cent of the market. By 2001, however, PC-based VR technology will account for 46 per cent of the business market, where most of the applications will be non-immersive, using computer screens instead of headsets.

Virtual Reality Market Forecasts by Application ($ millions, constant 1995 dollars)

Application                     1994   1995   2000   AAGR% 1995-2000
Instructional & Developmental     70     95    355   31
Design & Development VR           25     30    150   40
Entertainment VR                  60    110    500   35
Medical Treatment VR              10     20     50   20
Total                            165    255   1055   33

Source: Business Communications Company, Inc., GB-175, The Virtual Reality Business, 1996

Application domains and major marketing areas

Marketing experts currently agree that the major market activity is in entertainment equipment: leisure uses account for the largest share of VR market value and are forecast to continue growing at a 35% AAGR to the year 2000 (see table). Critical mass, in marketing terms, will be reached with mass-produced single-user entertainment VR systems; this will be the propelling force pushing the market from its 1995 value of $110 million to $500 million by the year 2000.

Home and entertainment

The greatest market expansion is expected for site-based entertainment. This expectation is based on two factors: the low saturation of the market and the dramatic decrease in prices. This will allow VR technology to be used by all facets of society, including commercial/industrial users, government, the military, universities and secondary schools, on a scale not comparable with any previous situation. VR will also play a large role in supporting education in general: the instructional and developmental market is expected to widen its share from $95 million in 1995 to $355 million by 2000, an AAGR of 31%. This increase will affect technical/engineering colleges and universities, and “developmental” VR includes spending on advanced but as yet non-commercial applications, along with pure science and research systems not included in the other categories.

Industrial and Scientific Design

Applications in the design and development VR market are in engineering, architecture and chemical design and development, where a constant shift is bringing the performance of CAD/CAM applications up to the standards of Virtual Reality applications. This market will grow from a 1995 value of $30 million to $150 million by 2000, an AAGR of 40%. The medical treatment VR market will also sustain growth: its 1995 value of $20 million is projected to reach $50 million by 2000, a 20% AAGR.

The search for common standards

Current VR products employ proprietary hardware and software. There is little doubt that incompatibility between different systems is restricting market growth at present. It is probable that as the market matures, certain de facto standards will emerge, perhaps when major players become involved. It is probable that the VR market will follow the route of the real-time financial information markets which found that adopting an open systems approach did not damage sales, as had been feared, but helped encourage the growth of the marketplace. According to the IMO – Information Group at Policy Studies Institute, London (August 95 – VIRTUAL REALITY: THE TECHNOLOGY AND ITS APPLICATIONS), “in the future an open systems approach will emerge for VR as well”. At that point, the market is likely to expand considerably.

However, the cost of VR equipment is falling rapidly. For example, headgear prices have already fallen from hundreds of thousands of dollars to $200 (ECU 169), and basic VR software packages are available commercially for $100 (ECU 85), or can be downloaded from the Internet. Simple VR games software is available in the USA for $70 (ECU 59).


3. VR in Europe

The seminal efforts that gave rise to VR took place in the US. Funding from EC organisations has been slower in coming than in the US, where the Office of Naval Research, National Science Foundation, and Advanced Research Projects Agency now fund VR research and the National Aeronautics and Space Administration has been a long-time developer. This situation is perhaps attributable to the large cost associated with VR until quite recently. However the importance of VR is clearly understood in Europe and progress is now going forward across the entire spectrum of virtual reality, with special emphasis on industrial and commercial applications.

Europe encompasses various countries and cultures, and acceptance of the importance of VR has not been uniform. Interest by British Aerospace, the presence of the parallel processing company Inmos (makers of the Transputer), and early funding by the Department for Trade and Industry are cited by UK researchers as factors that drove research in the UK in the mid-to-late 1980s. This resulted in technology transfer that has produced several successful commercial efforts. More recently, German laboratories and institutions have become active in applying immersion technology to a broad range of applications. France has several of Europe’s leading research institutions for machine vision, robotics, and related technologies that affect VR, but has been less active in developing systems that provide interactive immersion. Most other West European countries have some VR R&D.

In the last two years the EC has organised several events to evaluate VR as a topic for the next research initiative. Recently the EC presented a study titled “Telematics applications of VE – The use of Virtual Environment Techniques in the Application of Telematics to Health Care, Transport, Training and the Disabled and Elderly”. This study was the third activity in a series that started with a workshop in Brussels in March 1993, which attempted to produce a status report and begin gathering recommendations on how to incorporate VR in future EC programmes. The second activity was a small report creating the basis for a larger study, which was finally carried out by a team from the Fraunhofer Institute, SEMA Group and MIT.

The “Telematics Applications” study includes a short section on VE technologies and VE applications (generic uses of VE technological capabilities, evaluation of the market), then treats each of the mentioned fields (Education/Training, Transport, Health Care, and Elderly and Handicapped) and finishes off with potential actions for the TAP programme. In the health area the report states: “In effect, the objective is to validate the 3-D approaches of VE, and evaluate their benefits for future health care systems. In parallel, other projects aimed at providing basic building blocks for future uses in VE-based medical applications are also of interest. They concern digital and computational models of the human body or critical organs”. The report stresses the use of VE in minimally invasive surgery, surgical decision support and the training of surgeons, doctors and students. It also finds a use in the evaluation of human interfaces and other factors in the design of critical components of new health care facilities.

EC-funded projects/working groups relevant to VREPAR

The European Strategic Program for Research and Development (Esprit II) funded a handful of ongoing VR projects. Glad-in-Art is developing a glove-exoskeleton interface system to manipulate virtual objects, while SCATIS intends to integrate room acoustics into virtual worlds, and Humanoid concentrates on the development and simulation of virtual humans.

The call for proposals for Esprit III did not include a specific VR component. However, VR was explicitly mentioned within the basic research and multimedia components (two of the seven programme areas). Among the funded studies is FIVE (Framework for Immersive Virtual Environments).

Other VR projects deal with virtual environments and multi-modal interfaces (MIAMI and VETIR). VETIR deals with the use of virtual environment technologies in the rehabilitation of motor disabilities, within the Technology Initiative for Disabled and Elderly People.


4. Medical Applications of VR

Three important aspects of virtual reality systems offer new possibilities to medical treatment:

  • How They Are Controlled
    Present alternate computer access systems accept only one or at most two modes of input at a time. The computer can be controlled by single modes such as pressing keys on a keyboard, pointing to an on-screen keyboard with a head pointer, or hitting a switch when the computer presents the desired choice, but present computers do not recognize facial expressions, idiosyncratic gestures, or monitor actions from several body parts at a time. Most computer interfaces accept only precise, discrete input. Thus many communicative acts are ignored and the subtleness and richness of the human communicative gesture are lost. This results in slow, energy-intensive computer interfaces. Virtual reality systems open the input channel: the potential is there to monitor movements or actions from any body part or many body parts at the same time. All properties of the movement can be captured, not just contact of a body part with an effector.
    Given that these actions are monitored, why can the user control more in the virtual world than in the real world? In the virtual environment these actions or signals can be processed in a number of ways. They can be translated into other actions that have more effect on the world being controlled; for example, virtual objects could be pushed by blowing, pulled by sipping, and grasped by jaw closure. Proportional properties such as force, direction, and speed could become interchangeable, allowing the person with arthritic joints to push something harder, without the associated pain, by simply moving faster. Signals could be filtered to achieve a cleaner result. Actions can be amplified, so that movement of the index finger could guide a tennis racket. Alternatively, movements could be attenuated, giving the individual with large, poorly controlled movements more precise control of finer actions. A brief illustrative sketch of this kind of remapping follows this list.
  • Feedback
    Because VR systems display feedback in multiple modes, feedback and prompts can be translated into alternate senses for users with sensory impairments. The environment could be reduced in size to get the larger or overall perspective (without the “looking through a straw effect” usually experienced when using screen readers or tactile displays). Objects and people could show speech bubbles for the person who is deaf. Sounds could be translated into vibrations or into a register that is easier to pick up. Environmental noises can be selectively filtered out. The user with a spinal cord injury with no sensation in her hands could receive force and density feedback at the shoulder, neck, or head.
    For the individual multimodal feedback ensures that the visual channel is not overloaded. Vision is the primary feedback channel of present-day computers; frequently the message is further distorted and alienated by representation through text. It is very difficult to represent force, resistance, density, temperature, pitch, etc., through vision alone. Virtual reality presents information in alternate ways and in more than one way. Sensory redundancy promotes learning and integration of concepts.
  • What Is Controlled
    The final advantage is what is controlled. Until the last decade computers were used to control numbers and text by entering numbers and text using a keyboard. Recent direct manipulation interfaces have allowed the manipulation of iconic representations of text files or two dimensional graphic representations of objects through pointing devices such as mice (Brownlow, 1989). The objective of direct manipulation environments was to provide an interface that more directly mimics the manipulation of objects in the real world. The latest step in that trend, virtual reality systems, allows the manipulation of multisensory representations of entire environments by natural actions and gestures. This last step may make accessible valuable experiences missed due to physical or sensory impairments. These experiences may include early object-centered play, and early independent mobility.
    In virtual environments we can simulate inaccessible or risky experiences, allowing the user to extract the lessons to be learned without the inherent risk. Virtual reality systems can allow users to extend their world knowledge.
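
As referenced above, the remapping of movements can be pictured as a small signal-processing layer between the raw input and the virtual world. The sketch below is purely illustrative and assumes no particular VR API: it exponentially smooths a noisy one-dimensional movement and applies a configurable gain, so a small, shaky gesture becomes a larger, cleaner virtual action.

```python
# Illustrative remapping of a raw body movement into a virtual action:
# smoothing (filtering) plus gain (amplification or attenuation).
# The input values and the gain are hypothetical.

def remap(raw_samples, gain=5.0, smoothing=0.8):
    """Exponentially smooth a stream of 1-D positions and amplify it."""
    smoothed = raw_samples[0]
    virtual_positions = []
    for sample in raw_samples:
        smoothed = smoothing * smoothed + (1.0 - smoothing) * sample
        virtual_positions.append(gain * smoothed)
    return virtual_positions

# A small, jittery finger movement (centimetres) ...
finger = [0.0, 0.2, 0.1, 0.4, 0.3, 0.6, 0.5, 0.9]
# ... becomes a larger, steadier sweep of a virtual tennis racket.
print(remap(finger))
```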

According to an assessment of the current diffusion of VR in the medical sector gathered by the Gartner Group, forecasts for the future of VR in this area are quite promising. Within medical applications its strategic relevance will increase and gain importance. It is envisaged that by the year 2000, despite possible technological barriers, virtual reality techniques will be integrated into endoscopic surgical procedures. VR will also affect medical education for students as well as experienced practitioners, who will increasingly be trained with immersive simulation techniques. These educational practices are expected to become routine by the year 2005.

VR has until now been widely underused, probably because of prohibitive hardware costs; nevertheless, the technology is pushing forward new challenges and advances that will materialise by the year 2000. The medical use of VR will take place mainly in four domains:

  • teaching: VR will reproduce environments or special conditions that make it possible to educate medical personnel.
  • simulation: VR will mix video and scanner images to represent and plan surgical interventions and the effects of therapy.
  • diagnostics: it will be possible to forecast the effects of complex combinations of healing treatments.
  • therapy: a valuable exploitation of VR in the medical sector is the therapy of psychiatric/psychological disorders such as acrophobia, claustrophobia, nyctophobia, agoraphobia, eating disorders, etc. Therapeutic techniques will include practices that allow patients to reproduce and master problem environments.

For a more detailed description of the use of VR in health care, see the paper VR in Health Care: A Survey.


5. Issues to be solved

Although the technology is mature enough for different applications, key issues remain to be resolved before it can be put to practical use.

  • costs: The products seem to be “a solution in search of a problem”. As with early computer graphics products, the entry-level costs are relatively prohibitive. A complete VR environment, including workstations, goggles, body suits, and software, is in the range of ECU 70,000 to ECU 1,000,000.
  • lack of standards and reference parameters: The hyperbole and sensational press coverage associated with some of these technologies have led many potential users to overestimate the actual capabilities of existing systems. Many of them must actually develop the technology significantly for their specific tasks. Unless their expertise includes knowledge of the human-machine interface requirements for their application, the resulting product will rarely get beyond a “conceptual demo” that lacks practical utility.
  • human factors: The premise of VE is to enhance the interaction between people and their systems. It thus becomes very important to understand how people perceive and interpret events in their environments, both inside and outside virtual representations of reality. We must address issues of human performance to understand how to develop and implement VE technology that people can use comfortably and effectively. Fundamental questions remain about how people interact with these systems, how the systems may be used to enhance and augment cognitive performance, and how they can best be employed for instruction, training, and other people-oriented applications.

6. Conclusion

The market situation of VR is very fluid: the technology, while ready for professional applications, has not yet settled into definite standards and reference points in every respect, including likely leading manufacturers, compatibility specifications, performance levels, economic costs and human expertise. As things stand, the situation is heavily characterised by uncertainty.

This uncertainty should not be confused with a lack of confidence in the promising outcomes of the technology; rather, it reflects the rapid mutation and evolution that characterise all information technology markets. As far as the project is concerned, these reflections sound a warning against adopting solutions that can only be considered a short-term answer to a contingent problem. Particular care must be taken to avoid a continuous chase after the latest technological product release.

For the general aim of the project, we take advantage of the capillary diffusion of PC-based technology and of the best associated hardware and software devices available, which can ensure both reliability and availability in different domains, independently of the constraints posed by geographical location.

Global Impact: How GPUs Help Eye Surgeons See 20/20 in the Operating Room

Editor’s note: This is one in a series of profiles of five finalists for NVIDIA’s 2016 Global Impact Award, which provides $150,000 to researchers using NVIDIA technology for groundbreaking work that addresses social, humanitarian and environmental problems.

Performing ocular microsurgery is about as hard as it sounds — and, until recently, eye surgeons had practically been flying blind in the operating room.

Doctors use surgical microscopes suspended over a patient’s eyes to correct conditions in the cornea and retina that lead to blindness. These have limited depth perception, however, which forces surgeons to rely on indirect lighting cues to discern the position of their tools relative to sensitive eye tissue.

But Joseph Izatt, an engineering professor at Duke University, and his team of graduate students are changing that. They’re using NVIDIA technology to give surgeons a 3D, stereoscopic live feed while they operate.

“This is some of the most challenging surgery there is because the tissues that they’re operating on are very delicate, and particularly valuable to their owners,” said Izatt.

Duke is one of five finalists for NVIDIA’s 2016 Global Impact Award. This $150,000 grant is awarded each year to researchers using NVIDIA technology for groundbreaking work that addresses social, humanitarian and environmental problems.

Comparison of conventional rendering (left) and enhanced ray casting with denoising (right) of the anterior segment.

Two Steps Beyond Standard Practice

Standard practice for ocular microsurgery is to send the patient for a pre-operation scan. This generates images that the surgeon uses to map out the disease and plan surgery. Post-operation, the patient’s eye is scanned again to make sure the operation was a success.

State-of-the-art microscopes go one step further. They use optical coherence tomography (OCT), an advanced imaging technique that produces 3D images in five to six seconds. Izatt’s work goes another step beyond that by taking complete 3D volumetric images, updated every tenth of a second and rendered from two different angles, resulting in a real-time stereoscopic display into both microscope eyepieces.

“I’ve always been very interested in seeing how technology can be applied to improving people’s lives,” said Izatt, who has been working on OCT for over 20 years.

His team is using our GeForce GTX TITAN Black GPU, CUDA programming libraries and 3D Vision technology to power their solution. Rather than having to do pre- and post-operation images to gauge their success, surgeons can have immediate feedback as they operate.

3D Images at Micrometer Resolution

A single TITAN GPU takes the stream of raw OCT data, processes it, and renders 3D volumetric images. These images, at a resolution of a few micrometers, are projected into the microscope eyepieces. CUDA’s cuFFT library and special function units provide the computational performance needed to process, de-noise, and render images in real time. With NVIDIA 3D Vision-ready monitors and 3D glasses, the live stereoscopic data can be viewed by both the surgeon using the microscope and a group observing the operation as it occurs—a useful training and demonstration tool.
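
The article names the building blocks (cuFFT-based processing, de-noising, volume rendering) but not the code, so the fragment below is only a hypothetical, CPU-side sketch of the core spectral-domain OCT step: turning one raw spectral interferogram into a depth profile with NumPy. A real-time system would run the same transform on the GPU (for example with cuFFT) across the thousands of A-scans that make up each volume.

```python
# Hypothetical sketch of the core spectral-domain OCT step:
# background subtraction, windowing, FFT, and log-magnitude
# to turn one raw spectrum (an A-scan) into a depth profile.
# Real systems run this on the GPU (e.g., cuFFT) for whole volumes.
import numpy as np

def a_scan(spectrum, background):
    """Convert one raw spectral interferogram into a depth profile (dB)."""
    fringe = spectrum - background              # remove DC/background term
    fringe *= np.hanning(fringe.size)           # suppress FFT side lobes
    depth = np.fft.fft(fringe)                  # spectrum -> depth domain
    half = depth[: depth.size // 2]             # keep positive depths only
    return 20 * np.log10(np.abs(half) + 1e-12)

# Synthetic example: 2048-sample spectrum with a single reflector.
k = np.arange(2048)
background = np.full(2048, 100.0)
spectrum = background + np.cos(2 * np.pi * 0.05 * k)  # fringe frequency ~ depth
print(a_scan(spectrum, background).argmax())          # index of the reflector
```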

Resolution of abnormal iris adhesion in full thickness corneal transplant. The top row shows the abnormal iris adhesion (red arrow) in the normal en-face surgical view seen through the operating microscope (left), volumetric OCT (middle), and cross-sectional scan (right). The bottom row shows the result of the surgeon injecting a viscoelastic material to resolve the abnormal adhesion (green arrow).

“The current generation of OCT imaging instruments used to get this type of data before and after surgery typically takes about five or six seconds to render a single volumetric image,” said Izatt. “We’re now getting those same images in about a tenth of a second — so it is literally a fiftyfold increase in speed.”

Thus far, Izatt’s solution has been used in more than 90 surgeries at the Duke Eye Center and the Cleveland Clinic Cole Eye Institute. Out in the medical market, companies are still competing to commercialize real-time 2D displays. Izatt estimates his team’s 3D solution will be ready for commercial use in a couple of years.

“The most complex surgeries right now are done in these big centers, but some patients have to travel hundreds or thousands of miles to go to the best centers,” said Izatt. “With this sort of tool, we’re hoping that would instead be more widely available.”

The winner of the 2016 Global Impact Award will be announced at the GPU Technology Conference, April 4-7, in Silicon Valley.

Healthcare Industry Turns to Video Games to Treat MS

The video game industry continues to pave the way for advancements outside of entertainment. Now, a creative new partnership between technology and pharmaceutical giants is reimagining medical imaging, and with it, tackling an incurable and unpredictable central nervous system disease that affects 2.3 million people globally.

Microsoft recently teamed up with Novartis AG to develop AssessMS, a system to help assess patients suffering from multiple sclerosis (MS). The new program, which uses the Microsoft Kinect’s motion-tracking and camera technology, allows researchers to analyze important data about a patient’s physical symptoms, such as gait and dexterity, by recording their movements.

Imprecise measurements and inconsistent assessments of patients’ movements currently complicate patients’ and doctors’ ability to evaluate the severity of MS symptoms and to make informed choices about care and treatment options. These difficulties carry over into the pharmaceutical industry, making new drug trials challenging and costly.

Microsoft and Novartis believe their new program can change this calculus by offering more refined data. As patients perform simple body movements and gestures in front of the Kinect motion-sensing camera, doctors gain precise data with which to evaluate the degree of impairment.
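
AssessMS internals are not described in the article; as a purely hypothetical illustration of the idea, the sketch below computes a simple steadiness metric from a recorded series of hand positions, the kind of per-frame joint data a motion-tracking camera such as the Kinect can provide.

```python
# Hypothetical illustration only: a simple steadiness metric computed
# from tracked hand positions (x, y, z per frame), the kind of data a
# motion-tracking camera such as the Kinect provides. Not AssessMS code.
import math

def path_steadiness(positions):
    """Ratio of straight-line distance to actual path length (1.0 = perfectly steady)."""
    path = sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
    direct = math.dist(positions[0], positions[-1])
    return direct / path if path else 1.0

# Fabricated example: a hand moving from left to right with slight tremor.
frames = [(0.00, 0.50, 1.2), (0.05, 0.52, 1.2), (0.10, 0.49, 1.2),
          (0.15, 0.53, 1.2), (0.20, 0.50, 1.2)]
print(f"steadiness: {path_steadiness(frames):.2f}")  # closer to 1.0 = steadier
```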

So far, prototypes have run hundreds of tests with patients in three of the top MS clinics in Europe. If the system shows promise, Novartis hopes to pursue the clinical validation process and seek regulatory approval.

This new system marks a break with previous games-based treatment options. Previously, doctors and patients primarily used games built around devices such as the Nintendo Wii Balance Board to help people with MS improve their balance. However, as several recent studies have found, games can help patients improve motor skills and visual acuity, sharpen short-term memory, reduce depressive symptoms, and relieve chronic pain, all difficulties common among people with MS.

“This is really super-interesting work,” Tim Coetzee, chief advocacy, services, and research officer at the National MS Society in the U.S. told Bloomberg. “The problem we are trying to solve in MS cries out for tools like this one where it is about being able to give the physician some consistent approach to measure the evolution of the disease.”

With these new advancements, the video game industry has the potential to revolutionize healthcare. As studies and tests continue to improve and evolve, researchers and game companies alike are working together to discover new treatments – even cures – to some of today’s biggest medical challenges.

GPU-Accelerated Supercomputer Targets Tumors

A team of researchers from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) research lab in Germany is using the Titan supercomputer at Oak Ridge National Laboratory to advance laser-driven radiation treatment of cancerous tumors.

Recently, doctors have begun using beams of heavy particles, such as protons or ions, to treat cancerous tumors. These beams can deposit most of their energy inside the tumor while leaving the surrounding healthy tissue largely unharmed. Unfortunately, the beams are generated by large particle accelerators, which makes the treatment cost-prohibitive for many patients.

The German lab is developing a new therapeutic approach using high-powered lasers instead of inconvenient and expensive particle accelerators.

Proton density after laser impact on a spherical solid-density target: irradiated by an ultra-short, high-intensity laser (not in picture), the intense electromagnetic field rips electrons away from their ions and creates a plasma.

Image Credits: Axel Huebl, HZDR, David Pugmire, ORNL

HZDR researcher Michael Bussmann explains in a recent blog post that they are only able to run such complex simulations because his team has access to GPU-accelerated supercomputers.

The team does all of its calculations on Titan’s Tesla GPUs at a rate 10 to 100 times faster than what is possible on CPU-only machines. “We no longer think of simulations in terms of CPU hours but rather frames per second,” Bussmann said, describing the effect this speed-up has had on the team’s research.
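
HZDR’s simulation code is not shown here, so the sketch below is only a toy illustration of the per-particle work that such plasma simulations parallelize across GPU threads: it advances a set of charged particles in a fixed electric field with a simple explicit step. A real particle-in-cell code also solves the electromagnetic fields self-consistently and would run this update in CUDA rather than NumPy.

```python
# Toy illustration of the per-particle update that particle-in-cell
# plasma codes parallelize across GPU threads: advance velocity and
# position in a fixed electric field. Real codes (such as HZDR's)
# also solve the fields self-consistently; this is not their code.
import numpy as np

def push(positions, velocities, e_field, charge_over_mass, dt):
    """One explicit step: v += (q/m) E dt, then x += v dt."""
    velocities = velocities + charge_over_mass * e_field * dt
    positions = positions + velocities * dt
    return positions, velocities

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(1000, 3))   # 1000 particles in a unit box
v = np.zeros((1000, 3))
E = np.array([1.0, 0.0, 0.0])               # constant field along x

for _ in range(100):                         # 100 time steps
    x, v = push(x, v, E, charge_over_mass=1.0, dt=1e-3)

print(x.mean(axis=0))                        # particles drift along +x
```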



About me

My name is Sayed Ahmadreza Razian and I hold a master’s degree in Artificial Intelligence.
Click here for my CV/resume page.

Related topics such as image processing, machine vision, virtual reality, machine learning, data mining, and monitoring systems are my research interests, and I intend to pursue a PhD in one of these fields.

To view the introduction and resume page, click here.

My Scientific expertise
  • Image processing
  • Machine vision
  • Machine learning
  • Pattern recognition
  • Data mining - Big Data
  • CUDA Programming
  • Game and Virtual reality
