Unity 3D Project (VR)
An Internship Report on
Creating Assistant in Virtual Reality for
Healthcare
and
Medical Training
Submitted to
Vishwakarma Institute of Technology, Pune
(An Autonomous Institute Affiliated to Savitribai Phule Pune University)
in partial fulfillment of the requirements of the final year
B.Tech.
in
Instrumentation and Control
by
Suraj Darekar
(GR NO. 161316)
Under the guidance of
Prof. (Dr.) Manisha Mhetre
Department of Instrumentation and Control Engineering
Vishwakarma Institute of Technology, Pune-411037
Academic Year: 2019-20
Bansilal Ramnath Agarwal Charitable Trust’s
Vishwakarma Institute of Technology, Pune-37
(An Autonomous Institute Affiliated to Savitribai Phule Pune University)
Certificate
This is to certify that the project titled, Creating Assistant in Virtual
Reality for Healthcare and Medical Training, submitted by Suraj
Darekar (GR. No.: 161316), is a record of bonafide work carried out by him
under my guidance in partial fulfillment of the requirements of the final year in
Instrumentation and Control.
Prof. (Dr.) Manisha Mhetre
Internship Mentor
Dept. of Instrumentation Engineering
VIT, Pune

Dr. Shilpa Y. Sondkar
Head of Department
Dept. of Instrumentation Engineering
VIT, Pune
Date: 7 December 2019
Place: Pune
Examiners Certificate
This is to certify that the project titled, Creating VR Scenarios for
Medical Applications and Development, submitted by Suraj
Darekar (GR. No.: 161316), is approved for the award of the degree of
Bachelor of Technology in Instrumentation Engineering of Vishwakarma
Institute of Technology, Pune.
Signature
Examiner: Prof.(Dr.) Manisha Mhetre
Examiner:
Date: 7 December 2019
Place: Pune
Acknowledgements
I consider myself truly fortunate to be able to pursue an internship under the
semester-long internship program of the Department of Instrumentation and
Control Engineering. I am eternally grateful to the entire department for
introducing such a thoughtful program for students and making it a big
success. I would like to thank Prof. (Dr.) Shilpa Y. Sondkar, Head of
Department, Instrumentation Dept.; Prof. (Dr.) Manisha Mhetre, Student
Mentor, Instrumentation Dept.; and Prof. Rajendra Patel, Internship Guide,
Instrumentation Dept., for their substantial support and guidance throughout
my internship tenure and for looking after the academic side of the semester
throughout my internship.
The internship at Ethosh Designs Pvt. Ltd. was an amazing learning
experience, and I feel really fortunate to have been able to be a part of
this program. During the internship I acquired a wide spectrum of
knowledge from some phenomenal professionals in the industry, and I also came
across some of the latest technologies in the market, which enhanced my knowledge.
I express my deepest thanks to Nikhil Pathak, Asst. General Manager, Project Delivery and Technology Innovations, and Mr. Rahul Deshpande, CEO
at Ethosh, for helping me at every step and giving essential guidance
throughout my internship.
I consider this internship a valuable experience in my academic career. I
will preserve the experience and knowledge acquired during this tenure and
definitely utilise it in the future. I will keep improving the skills I
learned during my internship and apply them in my professional
life. I sincerely hope to continue this cooperation with all of you in the future.
1 INTRODUCTION
Virtual Reality (VR) is the use of computer technology to create a simulated
environment. Unlike traditional user interfaces, VR places the user inside an
experience. Instead of viewing a screen in front of them, users are immersed
and able to interact with 3D worlds. By simulating as many senses as possible,
such as vision, hearing, touch, even smell, the computer is transformed into
a gatekeeper to this artificial world. The only limits to near-real VR experiences are the availability of content and cheap computing power. Although VR
is best known for gaming, there are several instances where it has had a serious
impact on people's lives and begun to change the overall face of the healthcare
segment. Medical VR is rich with possibilities, and even though the field is
young, there are already striking examples of VR having a positive effect on
the lives of both patients and medical practitioners.
1.1 Virtual Reality in Medical Training
Virtual reality headsets are currently being used as a means to train medical
students for surgery. They allow students to perform essential procedures in a virtual,
controlled environment. Students perform surgeries on virtual patients, which
allows them to acquire the skills needed to perform surgeries on real patients. It
also allows the students to revisit the surgeries from the perspective of the lead
surgeon. With the use of VR headsets, students can watch surgical procedures
from the perspective of the lead surgeon without missing essential parts. Students can also pause, rewind, and fast forward surgeries. They also can perfect
their techniques in a real-time simulation in a risk-free environment.
Virtual Reality has the ability to transport you inside the human body, to access
and view areas that would otherwise be impossible to reach. Currently, medical
students learn on cadavers, which are difficult to get hold of and (obviously) do
not react in the same way a live patient would. In VR, however, you can view
any part of the body in minute detail in stunning 360° CGI reconstruction and
create training scenarios which replicate common surgical procedures.
Fig.1 - VR in Medical Training
2 Company Profile
Ethosh Digital is a leading digital experience and virtual reality services company
working with multiple Fortune 500 enterprises, offering persona-based experiential strategy, design, and development of tools and training for Marketing, Sales,
Services, and Customers. With our insight-driven approach and solutions, we
have consistently delivered tangible improvements in Sales, Marketing, and Services
results.
Founded by two seasoned professionals, Ethosh was incorporated on December
01, 2011 as a digital experience and virtual reality services company to help organizations create lasting brand impressions and build deeper customer relationships
across their customers’ journey. Starting from humble beginnings out of a small
350 sqft office in Pune, India, Ethosh today has grown into a 75 people strong
organization – with clients across USA, UK, Germany, Denmark, Netherlands,
Australia, and India.
2.1 Values
Excellence.
Teamwork.
Honesty.
Openness.
Socially Responsible.
Humility.
2.2 Culture
Diversity
Diversity helps our cross-cultural team work together in harmony towards one
single goal: to deliver experiences par excellence.
Innovation
Innovation encourages us to think and apply all our expertise for this one cause.
Creativity
Creativity allows us to unleash our inner potential to create unique outcomes.
2.3 The Team
Team Ethosh can best be defined as a set of robust, agile, self-driven, ambitious,
amiable, and passionate miracle makers: experts in the fields of engineering and
science, creative thinkers, visualizers, and technologists who create digital experience and virtual reality services that connect, engage, and inspire your customers.
We are passionate about creating visually inspiring, digitally charged customer experiences.
As a team we work towards our mission, transform our vision into reality
and help build a strong foundation for a successful enterprise.
3 Literature Review
3.1 Amazon Sumerian
Amazon Sumerian is a set of tools for creating high-quality virtual reality (VR),
augmented reality (AR), and 3D applications easily, without requiring any programming or 3D graphics expertise. With Sumerian, you can construct an interactive 3D scene, test it in the browser,
and publish it as a website that is immediately available to users. You can use the
Sumerian library of assets or bring your own; Sumerian also includes a library of
primitive shapes, 3D models, hosts, textures, and scripts.
The Sumerian community website has many helpful tutorials for every
level of experience. The Sumerian 3D engine provides a library for advanced
scripting with JavaScript to create interactive AR, VR, or 3D experiences. The built-in
state machine can animate objects and respond to user input like clicks and
movement. Work can be published directly to Amazon CloudFront as a
website that can be viewed with a WebVR-compatible browser. Experiences
can be viewed on desktops, mobile devices, and major VR headsets.
Amazon Sumerian lets you create virtual reality (VR), augmented reality (AR),
and 3D scenes that are made up of components and entities, organized into
projects. Let's look closely at the concepts used in the Sumerian editor.
3.1.1 Scenes
A scene is a 3D space that contains objects and behaviors that define a VR or
AR environment. Objects include geometry, materials, sounds that you import
from a supported file format, and objects that you create in the scene like
lights, cameras, and particle effects. Behaviors include state machine behaviors,
animations, timelines, and scripts. When you’re ready to show off your scene,
export it directly to Amazon CloudFront as a static website that you can open
in a browser.
3.1.2 Components and Entities
All objects and behaviors are components that combine to create entities. For
example, when you import a 3D model and add it to a scene, the editor creates
an entity that has a geometry component, a material component, a transform
component, and an animation component. You can then use the editor to add
a rigid body, colliders, and other components to the entity.
3.1.3 Assets
Assets are the images, sounds, scripts, models, and documents that you import
into Sumerian to use in a scene. You can manage assets independently of the
scenes that use them in the asset library. Assets can belong to a user or project.
3.1.4 Hosts
A host is an asset provided by Sumerian that has built-in animation, speech,
and behavior for interacting with users. Hosts use Amazon Polly to speak to
users from a text source. You can use hosts to engage users and guide them
through a virtual experience.
Fig.2 - AWS Sumerian Hosts
3.1.5 Projects
Projects are an organizational tool for managing scenes, assets, and templates.
3.1.6 Templates
Templates let you save a copy of a scene to use as a starting point for other
scenes. Templates belong to a project. Sumerian provides several templates,
which you can access from the dashboard.
3.2 Unity 3D
Unity is a cross-platform game engine developed by Unity Technologies, first
announced and released in June 2005 at Apple Inc.’s Worldwide Developers
Conference as a Mac OS X-exclusive game engine. As of 2019, the engine has
been extended to support more than 25 platforms. The engine can be used
to create three-dimensional, two-dimensional, virtual reality, and augmented
reality games, as well as simulations and other experiences. The engine has
been adopted by industries outside video gaming, such as film, automotive,
architecture, engineering and construction.
3.2.1 Unity Editor
The Unity Editor features multiple tools that enable rapid editing and iteration
in your development cycles, including Play mode for quick previews of your
work in real time.
All-in-one editor:
Available on Windows, Mac, and Linux, it includes a range of artist-friendly
tools for designing immersive experiences and game worlds, as well as a strong
suite of developer tools for implementing game logic and high-performance
gameplay.
2D and 3D:
Unity supports both 2D and 3D development with features and functionality
for your specific needs across genres.
AI pathfinding tools:
Unity includes a navigation system that allows you to create NPCs that can intelligently move around the game world. The system uses navigation meshes, created
automatically from your Scene geometry, or even dynamic obstacles,
to alter the navigation of characters at runtime; a minimal sketch follows.
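This illustrative C# sketch (not from the report's project) shows an NPC walking toward a target with a NavMeshAgent. It assumes a NavMesh has been baked for the scene; the target field is a hypothetical Transform assigned in the Inspector.

using UnityEngine;
using UnityEngine.AI;

// Attach to an NPC GameObject that has a NavMeshAgent component.
public class NpcMover : MonoBehaviour
{
    public Transform target; // hypothetical destination, set in the Inspector

    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        // Recompute the path each frame so the NPC follows a moving target.
        if (target != null)
            agent.SetDestination(target.position);
    }
}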
Efficient workflows:
Unity Prefabs, which are preconfigured Game Objects, provide you with efficient and flexible workflows that enable you to work confidently, without the
worry of making time-consuming errors.
User interfaces:
Its built-in UI system allows you to create user interfaces quickly and intuitively.
Physics engines:
We can take advantage of Box2D, the new DOTS-based physics system, and
NVIDIA PhysX support for highly realistic and high-performance gameplay.
Custom tools:
We can extend the Editor with whatever tools we need to match the team's
workflow, creating and adding customized extensions or finding what we need
on the Asset Store, which features thousands of resources, tools, and extensions
to speed up our projects; a minimal example follows.
Better Collaboration:
One can see what others are working on, right in the Unity editor.
3.2.2 Engine Performance
We can optimize our interactive creations with a top performing engine that
keeps on improving.
• Advanced profiling tools: We can continuously optimize our content throughout development with Unity's profiling features. Check whether your content
is CPU- or GPU-bound, for example, and pinpoint the areas that require improvement, so you can provide your audience with a smooth-running experience.
• Native C++ performance: We can benefit from our cross-platform native
C++ performance with the Unity-developed backend IL2CPP (Intermediate
Language To C++) scripting.
• Scripting runtime: Mono / .NET 4.6 / C# 7.3
• High-performance multithreaded system: We can fully utilize the
multicore processors available today (and tomorrow) without heavy programming. Its new foundation for enabling high performance is made up of three
sub-systems: the C# Job System, which gives you a safe and easy sandbox for
writing parallel code; the Entity Component System (ECS), a model for writing high-performance code by default; and the Burst Compiler, which produces
highly optimized native code. A minimal Job System sketch follows.
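This illustrative example, assuming the Jobs and Collections packages are installed, defines a parallel job that squares an array of floats; it is a toy sketch, not code from the report.

using Unity.Collections;
using Unity.Jobs;

// A job that squares every element of a NativeArray in parallel.
public struct SquareJob : IJobParallelFor
{
    public NativeArray<float> values;

    public void Execute(int index)
    {
        values[index] = values[index] * values[index];
    }
}

// Example usage, e.g. inside a MonoBehaviour method:
// var data = new NativeArray<float>(1024, Allocator.TempJob);
// JobHandle handle = new SquareJob { values = data }.Schedule(data.Length, 64);
// handle.Complete(); // wait for the worker threads to finish
// data.Dispose();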
3.2.3 Graphics Rendering
• Real-time rendering engine: We can produce amazing visual fidelity with
Real-Time Global Illumination, Raytracing and Physically Based Rendering.
• Native Graphics APIs: Unity supports multiple platforms but stays close
to each platform's low-level graphics API, allowing you to take advantage of
the latest GPU and hardware improvements, like Vulkan, iOS Metal, DirectX 12,
NVIDIA VRWorks, or AMD LiquidVR.
• Scriptable Render Pipeline: We can create custom render pipelines or
pick the configuration that best fits the project: the new HDRP or the Universal Render
Pipeline (formerly the Lightweight Render Pipeline) from Unity.
3.2.4 Unity Asset Store
The Unity Asset Store is home to a growing library of free and commercial
Assets created by both Unity Technologies and members of the community.
A wide variety of Assets is available, covering everything from Textures, Models
and animations to whole Project examples, tutorials and Editor extensions. You
can access the Assets from a simple interface built into the Unity Editor which
allows you to download and import Assets directly into your Project. Unity
users can become publishers on Asset Store, and sell the content they have
created. During your first visit, you can create a free user account which allows
you to log into the Store on future visits and keep track of previous purchases
and downloads.
3.3 Comparison between Amazon Sumerian and Unity 3D
Unity 3D vs. AWS Sumerian:
• Experience: Game-development experience is needed in Unity; Sumerian is easy to use, even for a beginner game developer.
• Connectivity: Unity is offline software; Sumerian requires a high-speed internet connection for smooth operation.
• Assets: Unity has no size limitation and accepts both FBX and OBJ files; Sumerian limits assets to 50 MB and requires them to be triangulated.
• Scripting: Unity uses C# scripting; Sumerian uses JavaScript.
• Behavior: In Unity, scripting is required for any behavior and game flow; Sumerian's state transition machine enables easy flow and behavior of the game.
• Integration: Unity can integrate all software; Sumerian can access only AWS software.
• Preview: Unity's Scene and Game layouts give an overview of how the scene will look; Sumerian cannot preview scenes in the activity layout.

Table no. 1: Comparison between Unity 3D and AWS Sumerian
3.4 SimX VR
SimX’s software replaces your physical simulation mannequins with a customizable, high-definition, 3D virtual patient that can be projected anywhere. Whether
obese, pregnant, young, old, vomiting, missing limbs, bleeding, or expressing any
number of other physical signs and symptoms, SimX’s software allows you to
reproduce patient presentations with unprecedented visual fidelity.
SimX allows up to four trainees to work around the same virtual patient, completely wirelessly. You can combine systems to have even more players working
together, even if they are located across the world from each other. The system sets up anywhere in less than 10 minutes, turning any space into a sim center.
Fig.3 - Virtual Environment in SimX
3.5 Natural Language Processing
3.5.1 Speech-to-Text Plugins:
1. IBM Watson STT:
Watson Speech to Text is a cloud-native solution that uses deep-learning
AI algorithms to apply knowledge about grammar, language structure,
and audio/voice signal composition to create customizable speech recognition for optimal text transcription.
2. Google Cloud Speech Recognition:
Google Cloud Speech-to-Text enables developers to convert audio to text
by applying powerful neural network models in an easy-to-use API.
3.5.2 Text-to-Speech Plugins:
1. IBM Watson TTS:
The IBM® Text to Speech service provides APIs that use IBM's speech-synthesis capabilities to synthesize text into natural-sounding speech in
a variety of languages, dialects, and voices. It enables your systems to
"speak" like humans and lets you customize and control pronunciation, delivering a
seamless voice interaction that caters to your audience with control over
every word. It synthesizes across languages and voices, converting text in
English, French, German, Italian, Japanese, Spanish, and Brazilian Portuguese, and it detects different dialects, such as US and UK English and
Castilian, Latin American, and North American Spanish.
2. Google Cloud TTS:
Google Cloud Text-to-Speech enables developers to synthesize natural-sounding speech with 100+ voices, available in multiple languages and
variants. It applies DeepMind’s groundbreaking research in WaveNet and
Google’s powerful neural networks to deliver the highest fidelity possible.
As an easy-to-use API, you can create lifelike interactions with your users,
across many applications and devices.
3.5.3 Virtual Assistant Plugins:
1. IBM Watson Assistant:
Assistant enables you to create an application that understands natural language and responds to customers in human-like conversation, in multiple languages. It connects seamlessly to messaging channels, web environments, and social networks to make scaling easy. You can easily configure a
workspace and develop your application to suit your needs. There is no
limit to what you can do.
2. Google Cloud Natural Language:
The cloud Natural Language API is a Google service that offers an interface to several NLP models which have been trained on large text corpora.
The API can be used for entity analysis, syntax analysis, text classification, and sentiment analysis. Natural Language uses machine learning
to reveal the structure and meaning of text. You can extract information about people, places, and events, and better understand social media
sentiment and customer conversations. Natural Language enables you to
analyze text and also integrate it with your document storage on Google
Cloud Storage.
3.5.4 DialogFlow
Dialogflow is a Google-owned developer of human–computer interaction technologies based on natural-language conversations. The company is best known
for creating the Assistant, a virtual buddy for Android, iOS, and Windows
Phone smartphones that performs tasks and answers users' questions in natural language.
Powered by Google’s machine learning:
Dialogflow incorporates Google’s machine learning expertise and products such
as Google Cloud Speech-to-Text.
Built on Google infrastructure:
Dialogflow is a Google service that runs on Google Cloud Platform, letting you
scale to hundreds of millions of users.
Optimized for the Google Assistant:
Dialogflow is the most widely used tool to build Actions for the 400M+
Google Assistant devices.
Fig.4 - Working of DialogFlow
3.5.5 VR Headsets
A virtual reality headset is a head-mounted device that provides virtual reality
for the wearer. Virtual reality (VR) headsets are widely used with video games
but they are also used in other applications, including simulators and trainers.
They comprise a stereoscopic head-mounted display (providing separate images
for each eye), stereo sound, and head motion tracking sensors (which may include gyroscopes, accelerometers, magnetometers, structured light systems etc.).
Some VR headsets also have eye tracking sensors and gaming controllers.
Constraints:
1. Latency Requirements:
If the system is too sluggish to react to head movement, then it can cause
the user to experience virtual reality sickness, a kind of motion sickness.
2. Resolution and display quality:
Image clarity depends on the display resolution, optic quality,
refresh rate, and field of view.
3. Lenses:
The lenses of the headset are responsible for mapping the up-close display
to a wide field of view, while also providing a more comfortable distant
point of focus. Fresnel lenses are commonly used in virtual reality headsets
due to their compactness and lightweight structure.
4. Controllers:
These devices let the player control an avatar within a game;
the player's movements are mirrored by the avatar to complete the
game.
There are two primary categories of VR devices:
Standalone - devices that have all necessary components to provide virtual
reality experiences integrated into the headset. Mainstream standalone VR
platforms include:
• Oculus Mobile SDK, developed by Oculus VR for its own standalone headsets
and the Samsung Gear VR.
• Google Daydream, a virtual reality platform built into Google’s Android operating system since version 7.1.
Tethered - headsets that act as a display device to another device, like a PC
or a video game console, to provide a virtual reality experience. Mainstream
tethered VR platforms include:
• SteamVR, part of the Steam service by Valve Corporation. The SteamVR
platform uses the OpenVR SDK to support headsets from multiple manufacturers, including HTC, Windows Mixed Reality headset manufacturers, and
Valve itself.
• Oculus PC SDK for Oculus Rift and Oculus Rift S.
• Windows Mixed Reality (also referred to as "Windows MR" or "WMR"),
developed by Microsoft Corporation for Windows 10 PCs.
• PlayStation VR, developed by Sony Computer Entertainment for use with
the PlayStation 4 home video game console.
• Open Source Virtual Reality (also referred to as "OSVR").
A few of the best VR headsets:
1. HTC Vive:
Pros:
• One of the best VR experiences.
• Having Valve as software partner.
Cons:
• Very expensive.
• Requires a highly efficient GPU.
2. PlayStation VR :
Pros:
• Quite affordable.
• Almost PC-level performance.
• Nice selection of games.
Cons:
• Lacks some essential accessories.
• Imperfect motion-controller tracking.
• Issues with sealing out light.
3. Oculus Rift :
Pros:
• Fits very comfortably.
• Coolest VR games.
• Increasing list of apps and movies.
Cons:
• Extensive PC requirements.
• Tendency to cause nausea.
4. Samsung Gear VR :
Pros:
• Smaller design.
• Very lightweight.
• Convenient for spectacle wearers.
Cons:
• A bit expensive compared to its benefits.
• Compatible only with Samsung smartphones.
• Limited games and other content.
• Might cause nausea.
4 Requirement Analysis
4.1 Oculus Go
The Oculus Go is a standalone virtual reality headset released on May 1, 2018.
It was developed by Oculus VR in partnership with Qualcomm and Xiaomi,
and a Xiaomi-branded version is being sold in China as the Mi VR Standalone.
The Go is an untethered all-in-one headset, meaning it contains all the necessary components to display graphics and doesn't require a connection to an
external device. It's equipped with a Qualcomm Snapdragon 821 SoC
and is powered by a 2600 mAh battery. It uses a single 5.5-inch LCD display
with a resolution of 1280 x 1440 pixels per eye and a refresh rate of 72 or 60
Hz, depending on the application. Input is provided by a wireless controller
that functions much like a laser pointer. The headset and controller use
non-positional three-degrees-of-freedom tracking, making the Go suitable for seated or
static-standing activities but unsuitable for room-scale applications.
Screen: 2560 x 1440 @ 72 Hz
Lenses: Custom Fresnel
FoV: 101 degrees
SoC: Snapdragon 821
RAM: 4 GB
Storage: 32/64 GB
Battery: 2600 mAh
Connectivity: Wi-Fi
Audio: Speakers / 3.5 mm jack
Weight: 177 grams
Price: $199 / $249

Table no. 2: Specifications of Oculus Go
The Oculus Go has been designed to reduce heat in
such a way that overheating shouldn't ever be a problem while wearing the
headset. The metallic front panel conducts heat well, and a gap around the
entire front rim helps with airflow where it is most needed; in testing,
the headset has yet to show a temperature warning. Oculus Go
comes in two models, for those who would prefer more or less storage
in the headset. The headsets are visually identical, but one has a total
capacity of 32GB while the other has 64GB.
Fig.5 - Oculus Go VR Headset
4.1.1 Advantages of Oculus Go
1. Comfortable and easy to use.
2. Inexpensive compared to other VR headsets.
3. Large library of VR apps.
4. The controller is comfortable, with accurate motion tracking.
4.1.2 Drawbacks of Oculus Go
1. Still a bit limited by its mobile hardware.
2. Battery life could be better.
5 Activities
5.1 Creating Assistant
Speech Commands: When the user commands the assistant to bring some
object, the assistant searches for the position of the object. This is done
using raycasting, where the ray hits the game object and returns the name of
the object.
Object Detection: The assistant brings any object ordered by the user and
later returns it to its original position. For this we created a speech
plugin with a delay of only microseconds.
Grabbing of Object: We created a character (rigged model) in Blender that can
grab an object of any size. We also created a grabbing animation
and an animator controller to drive the character's animation when
triggered.
Returning to its original position: Once the object is given to the user, the
character returns to its original position, which is predefined in our
C# script. A minimal sketch of this behaviour follows.
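The sketch below assumes the assistant moves with a NavMeshAgent; the names (FetchAssistant, OnSpeechCommand, homePosition) are hypothetical illustrations rather than the project's actual code, and the grab-animation trigger is omitted.

using UnityEngine;
using UnityEngine.AI;

public class FetchAssistant : MonoBehaviour
{
    private NavMeshAgent agent;
    private Vector3 homePosition; // the predefined resting position

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        homePosition = transform.position;
    }

    // Called with the object name recognized from the speech command.
    public void OnSpeechCommand(string objectName)
    {
        // Cast a ray from the user's viewpoint; if it hits the named
        // game object, walk the assistant to that object's position.
        // A full implementation would then trigger the grab animation
        // and carry the object back to the user.
        if (Physics.Raycast(Camera.main.transform.position,
                            Camera.main.transform.forward, out RaycastHit hit)
            && hit.collider.name == objectName)
        {
            agent.SetDestination(hit.collider.transform.position);
        }
    }

    // After handing the object over, return to the predefined position.
    public void ReturnHome()
    {
        agent.SetDestination(homePosition);
    }
}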
5.2 RawMocap Data Mecanim
5.2.1 Acquire Some Mocap Data
First, acquire some mocap data. To get high-quality, ready-to-use character
animations for Unity, we can use Mixamo. Mixamo's technologies use machine-learning methods to automate the steps of the character animation process,
from 3D modeling to rigging and 3D animation.
Fig.12 - Mocap Data
5.2.2 Cleanup and Convert to FBX
Mocap data can be stored in a number of different file formats; FBX,
BVH, C3D, and BIP are the most common. However, before importing into
Unity, you'll need to convert all your animation clips to FBX. You may also
need to do some cleanup work (in terms of quality).
Fig.13 - Mocap Cleanup Process.
5.2.3 Import FBX to Unity
Once we've converted our mocap file to FBX format, drag it into the Unity
project and select it in the Project explorer. In the Inspector panel, convert
the rigged model to Humanoid to use the character animations in the project.
5.3 Google Speech Recognition
Of the speech plugins mentioned above, we tried several services such as
Windows Speech Recognition, SAPI, and IBM Watson. The problem with
these plugins was that some supported only Windows, while the Oculus VR headset
is an Android device; IBM Watson supports all platforms but recognizes
only 'US' English and has a delay of around 5-6 seconds to analyse speech-to-text (STT).
Dialogflow is an assistant developed by Google that also has a Unity SDK, but
it did not support STT in Unity. Therefore, we created an STT plugin in
Android Studio, imported its classes.jar and AndroidManifest file into
Unity, and wrote C# code to call the STT package from that classes.jar
file (a minimal sketch follows the figure below). Google STT has a delay of only microseconds and recognizes
120 languages.
Fig.14 - Google STT.
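The bridging code can be sketched as follows; the Java class and method names (com.example.stt.SpeechPlugin, startListening) are hypothetical placeholders for the plugin built in Android Studio, and results would be sent back to Unity from the Java side (e.g. via UnitySendMessage).

using UnityEngine;

public class SpeechBridge : MonoBehaviour
{
    private AndroidJavaObject plugin;

    void Start()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        // Fetch the current Android activity and construct the plugin
        // class that was packaged into classes.jar.
        using (var unityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer"))
        {
            var activity = unityPlayer.GetStatic<AndroidJavaObject>("currentActivity");
            plugin = new AndroidJavaObject("com.example.stt.SpeechPlugin", activity);
        }
#endif
    }

    // Start a recognition session; null-checked so it is a no-op in the Editor.
    public void StartListening()
    {
        plugin?.Call("startListening");
    }
}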
Using this speech plugin, we created a project that changes the color of a cube
on speech commands, backed by a library of more than 1300 colors; a minimal
sketch follows the figure below.
Fig.15 - Cube.
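In the sketch, a recognized colour word is mapped to a Color and applied to the cube's material; only a handful of the 1300+ colours are shown, and the class and method names are illustrative.

using System.Collections.Generic;
using UnityEngine;

public class CubeColorizer : MonoBehaviour
{
    // A tiny excerpt of the colour library used in the project.
    private static readonly Dictionary<string, Color> colors =
        new Dictionary<string, Color>
        {
            { "red", Color.red },
            { "green", Color.green },
            { "blue", Color.blue },
            { "yellow", Color.yellow },
        };

    // Called with the word returned by the speech-to-text plugin.
    public void OnColorCommand(string word)
    {
        if (colors.TryGetValue(word.ToLowerInvariant(), out Color c))
            GetComponent<Renderer>().material.color = c;
    }
}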
5.4 Pre-checklist
A checklist is a type of job aid used to reduce failure by compensating for
the potential limits of human memory and attention. It helps to ensure
consistency and completeness in carrying out a task. A basic example is the
"to do" list. A more advanced checklist is a schedule, which lays out
tasks to be done according to time of day or other factors. A primary function
of a checklist is documentation of the task and auditing against that documentation.
5.4.1 Application in Healthcare
Checklists have been used in healthcare practice to ensure that clinical
practice guidelines are followed. An example is the WHO Surgical Safety
Checklist developed for the World Health Organization and found to have a
large effect on improving patient safety. According to a meta-analysis, after
introduction of the checklist, mortality dropped by 23 percent and all
complications by 40 percent, although higher-quality studies are required to make the
meta-analysis more robust.
5.4.2 Our Implementation of Checklist in VR
Watson Assistant helps the user traverse the checklist and communicate with
the entities, with the help of intents. The user plays the role of the
operating surgeon in VR and provides inputs to the assistant, which serve
as the inputs to the other characters in the VR scenario. Different intents
are assigned to the respective characters. As soon as the user gives an
input, the character outputs the corresponding reply based on keyword
recognition. The user can provide input to the assistant in any order and
is not constrained to the sequential flow of the dialogue. A minimal sketch
of this intent routing follows.
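In the illustrative C# sketch below, the assistant returns an intent name, and each intent is mapped to the VR character that should respond; the intent names, character fields, and reply handling are hypothetical simplifications of the actual scenario.

using System.Collections.Generic;
using UnityEngine;

public class ChecklistRouter : MonoBehaviour
{
    public AudioSource anesthetist; // characters in the operating theatre
    public AudioSource nurse;

    private Dictionary<string, AudioSource> intentToCharacter;

    void Start()
    {
        // Map each Watson Assistant intent to the character who replies.
        intentToCharacter = new Dictionary<string, AudioSource>
        {
            { "confirm_anesthesia", anesthetist },
            { "confirm_instruments", nurse },
        };
    }

    // Called with the intent identified from the surgeon's spoken input
    // and the pre-recorded reply clip for that checklist step.
    public void OnIntent(string intent, AudioClip reply)
    {
        if (intentToCharacter.TryGetValue(intent, out AudioSource character))
            character.PlayOneShot(reply);
    }
}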
5.5 Future Scope
There has been a huge increase in the use of VR in the healthcare sector, and we
continue to be fascinated and impressed by its different applications. From developing
new life-saving techniques to training the doctors of the future, VR has a
multitude of applications for health and healthcare, from the clinical to the
consumer. By 2020, the global market could be worth upwards of 3.8 billion
dollars. Below are some of the ways virtual reality is being used to train and
support healthcare professionals, change lives, and heal patients.
• Medical training
• Patient treatment
• Medical marketing
• Disease awareness
Virtual reality technologies allow an operation to be practised, and the
outcome viewed, before the patient undergoes surgery—such as in breast
reconstruction and corrective maxillofacial surgery. Thus, the surgical
approaches can be optimised and rehearsed, with obvious advantages for
patients and healthcare providers.
In keeping with the early attempts to introduce technologies such as virtual
reality and robotics into other markets, a sea change in medical opinion will be
required and a massive learning curve will have to be overcome if the advances
already achieved, never mind those to come, are to be translated into realities
in health care.
6 Conclusion
Every generation has a technology that defines it. For our parents,
it was the internet. For us, it was the iPhone. For the next generation, it will
be virtual reality. The major obstacle is the eye strain caused by VR
headsets, but if we use it constructively and for our development, VR enables
us to experience a virtual world that would be impossible in the real world.