Hypertech


WNS MBA REGIS TSINFOTE011617


What’s Inside: Meet the WNS Regis MBA Program Batch 1 Members

1. 3D PRINTING
2. AUGMENTED REALITY
3. CLOUD COMPUTING

All contents of this e-magazine are from the Emerging Technology papers of the Information Technology class, Ateneo Graduate School of Business, Rockwell Center, Makati City, SY 2016–2017.


Foreword

This issue of Hypertech E-magazine features three emerging technologies:
• Augmented Reality
• 3D Bio-Printing
• Cloud Computing

"Facebook's founder, Mark Zuckerberg, is focusing on three main technologies that will drive FB into the next decade: augmented reality/virtual reality, artificial intelligence, and connectivity."

Facebook invested US$2 billion to acquire Oculus Rift virtual reality technology as it embeds this in its cloud platform. 3D Bio-Printing, which emerged from the more generic 3D Printing (aka Additive Manufacturing), has the potential to disrupt today's healthcare technologies and increase the human life span by printing human organs (kidney, heart, pancreas) grown from the patient's own stem cells. One day, companies like Organovo may be able to simply harvest a grown adult's stem cells from a blood draw, use a specialized 3D printer to build an organic, polymeric scaffolding in the shape of the organ or tissue that needs to be replicated, and literally grow a kidney, heart, or lungs within a matter of days or weeks. In theory, pluripotent stem cells can be harvested safely from the intended transplant recipients, without damage to any unborn fetuses. They offer patients no chance of organ rejection due to their self origin, and bypass the need for endless waiting lists on which patients may never reach the top before it is too late.

The next stage of cloud computing will be AI-driven. During the past few years, cloud computing has become a mainstream element of modern software solutions, just as common as websites or databases. The cloud computing market is a race vastly dominated by four companies: Amazon, Microsoft, Google and IBM, with a few other platforms gaining traction in specific regional markets, such as AliCloud in China. In such a consolidated market, it is hard to imagine a technology being disruptive enough to alter the existing dynamics. Artificial intelligence (AI) is the type of technology with the potential not only to improve the existing cloud platform incumbents but also to power a new generation of cloud computing technologies. Through the "pay as you go" scheme, or sachet pricing, cloud computing also represents a big opportunity for SMEs to avail of hardware and software technologies that would otherwise have been available only to big business with big capital expenditure budgets.

The strengths, weaknesses, opportunities and threats of these three technologies, with their corresponding cost-benefits and their ethical and nation-building implications, are discussed in the following papers.

Prof. Gary A. Grey
Hypertech Magazine, Infote Regis WNS Issue
October – December 2016


Augmented Reality

Augmented reality is the integration of digital information with the user's environment in real time. Unlike virtual reality, which creates an entirely artificial environment, augmented reality uses the existing environment and overlays new information on top of it. AR applications are written in special 3D programs that let the developer tie animation or contextual digital information in the computer program to an augmented reality "marker" in the real world. When a computing device's AR application or browser plug-in receives digital information from a known marker, it begins to execute the marker's code and layers the correct image or images. A tracker system determines the relative position between a sensor and an object surface; it generally comprises one or more sensors for detecting a pattern of fiducials arranged on a surface and a processor connected to at least one sensor.

A method for tracking the position and orientation of an object generally comprises the steps of scanning over an object to detect fiducials and form video frames, processing the video rapidly to detect a pattern of fiducials, acquiring estimated values for a set of tracking parameters by comparing the detected pattern of fiducials with a reference pattern of fiducials, and iterating the estimated values for the set of tracking parameters until the detected pattern matches the reference pattern. A method of augmenting reality generally comprises the steps of arranging a pattern of fiducials on a surface, tracking the position and orientation of the object, retrieving and processing virtual information stored in computer memory according to the position and orientation of the object, and presenting the virtual information together with real information to the user. AR applications for smartphones typically use GPS to pinpoint the user's location and a compass to detect device orientation.


Advanced AR programs used by the military may incorporate machine vision, object recognition and gesture recognition.
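The iterative fiducial-matching loop described above, comparing a detected pattern against a reference pattern and refining tracking parameters until they match, can be sketched minimally. This is an illustration only (translation-only tracking; real AR trackers estimate a full 6-degree-of-freedom pose), and all function names here are hypothetical:

```python
def centroid(points):
    """Mean position of a 2D fiducial pattern."""
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def track(reference, detected, tol=1e-6, max_iter=50):
    """Iteratively refine a translation (tx, ty) until the
    transformed reference fiducials match the detected ones."""
    tx = ty = 0.0
    for _ in range(max_iter):
        moved = [(x + tx, y + ty) for x, y in reference]
        (mx, my), (dx, dy) = centroid(moved), centroid(detected)
        step_x, step_y = dx - mx, dy - my      # residual mismatch
        tx, ty = tx + step_x, ty + step_y      # update parameters
        if abs(step_x) < tol and abs(step_y) < tol:
            break                              # patterns now match
    return tx, ty

# Example: the detected pattern is the reference shifted by (5, -2)
ref = [(0, 0), (1, 0), (0, 1)]
det = [(5, -2), (6, -2), (5, -1)]
print(track(ref, det))  # → approximately (5.0, -2.0)
```

The loop structure, estimate, compare, refine, is the essence of the tracking method described; production systems replace the centroid step with pose estimation from camera geometry.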

History of Augmented Reality

In 1968, Ivan Sutherland at the University of Utah, with his student Bob Sproull, made the first Virtual Reality (VR) and Augmented Reality (AR) head-mounted display system. Sutherland's head-mounted device was so heavy that it had to be suspended from the ceiling, and the imposing appearance of the device inspired its name, the Sword of Damocles. The system was primitive both in terms of user interface and realism, and the graphics making up the virtual environment were simple wireframe rooms. The system showed output from a computer program on the stereoscopic display, and the perspective the software showed depended on the position of the user's gaze, which is why head tracking was essential. The weight of Sutherland's HMD, and the need to track head movements, required the HMD to be attached to a mechanical arm suspended from the ceiling of the lab. A user had to have his or her head securely fastened into the device to take part in the experience.


In the mid-1970s, Myron Krueger set up an artificial reality laboratory called the Videoplace. His idea for the Videoplace was the creation of an artificial reality that surrounded its users and responded to their movements and actions, without the encumbrance of goggles or gloves. The work done in the lab would form the basis of his 1983 book Artificial Reality.

The Videoplace used projectors, video cameras, special-purpose hardware, and on-screen silhouettes of the users to place the users inside an interactive environment. Users in separate rooms in the lab could interact with one another through this technology. The movements of the users recorded on video were analyzed and transferred to the silhouette representations of the users in the Artificial Reality environment. Because users could visually observe the results of their actions on screen, through the crude but effective colored silhouettes, they had a sense of presence while interacting with on-screen objects and other users, even though no direct tactile feedback was available. The sense of presence was strong enough that users pulled away when their silhouettes intersected with those of other users.

In 1990, Boeing researcher Tom Caudell first coined the term "augmented reality" to describe a digital display used by aircraft electricians that blended virtual graphics onto a physical reality. The computer science world's definition of augmented reality (AR) is more detailed, but essentially the same: augmented reality is the interaction of superimposed graphics, audio and other sense enhancements over a real-world environment that is displayed in real time.

The goal was to find an alternative to the expensive diagrams and marking devices then used to guide workers on the factory floor. Caudell and his colleagues proposed replacing the large plywood boards, which contained individually designed wiring instructions for each plane, with a head-mounted device that would display a plane's specific schematics through high-tech eyewear and project them onto multipurpose, reusable boards. Instead of reconfiguring each plywood board manually at every step of the manufacturing process, the customized wiring instructions would essentially be worn by the worker and changed quickly and efficiently through a computer system.

Rosenberg's 1993 paper, Virtual Fixtures: Perceptual Tools for Telerobotic Manipulation, explains what virtual fixtures are and the protocol he used as an application. The idea is that the user wears an exoskeleton to perform a task from the operator space in a remote environment. In addition, either the exoskeleton or physical objects (on the fixture board) constrain the possible movements of the operator's arm, much as a physical ruler constrains the movement of your pencil (so you can draw a straight line much faster and with more precision). The difference between the virtual fixture and the ruler is that the virtual fixture does not physically exist in the remote environment, i.e., where the task is actually performed (it can exist in the operator space), yet you still receive its haptic (tactile) feedback.


For example, it is as if a surgeon could have a ruler during an operation, though for obvious reasons you cannot physically place a ruler through the patient's body.

In 1994, Julie Martin created the first "Augmented Reality Theater Production," Dancing in Cyberspace. Funded by the Australia Council for the Arts, it featured dancers and acrobats manipulating body-sized virtual objects in real time, projected into the same physical space and performance plane. The acrobats appeared immersed within the virtual objects and environments. The production used Silicon Graphics computers and a Polhemus sensing system.

Trends in the Technology

Creating a Wikipedia for 3D Objects
By using image-recognition technology on our cell phones and tablets, users will be able to scan and identify 3D objects like plants, furniture, or even car makes and models. Before long, AR will revolutionize not just how we search for information on the Internet, but how we process it as well. Just as the Wikipedia database grows every day, a library built through AR will eventually contain details on any 3D object you can set your eyes on in the physical world.

Classroom Application of AR
Schools and colleges around the globe embrace technology in the classroom, using tablets and SMART boards in everyday lessons. Augmented reality will play a significant part in continuing to digitalize and change the way we teach, presenting new opportunities for students and teachers to engage with educational content. From virtual dissection experiments to interactive textbooks, students will benefit from a more immersive learning experience, and one that genuinely prepares them for STEM (science, technology, engineering and math) careers. This means the "bring your own device" trend may even catch on in the classroom.

Wearable Technology
Augmented reality will make wearable technology more prevalent. It is already found in the health and fitness markets with the arrival of the Nike+ FuelBand and Fitbit Flex, and Google Glass will continue to usher in a new wave of AR-enabled wearable computing devices. Indeed, Forrester Research has already described "wearables" as "the next wave of consumer technology product innovation." Though many AR experiences currently require a smartphone or tablet, as a camera is needed to recognize the image, tech giants such as Apple and Samsung will start rolling out their own product lines of standalone devices, where wearable technology will likely become a piece of everyday fashion.


Moving beyond Augmented Reality

Without a doubt, the AR continuum will keep evolving. Many of us have already experienced AR firsthand on products and advertisements, where another layer of information is added on top of the real world around us. Soon we will see this trend move toward "augmented virtuality" and virtual environments, as with the Wii and motion sensors, where users will have their movements reproduced by 3D digitization. Eventually, this will become a full 360-degree virtual experience in which users are completely immersed in a precise digital world. The bottom line is that augmented reality has not only arrived but is here to stay. By crossing verticals and providing new opportunities for interactive, educational, and entertaining experiences that have not existed before, the technology will soon hold a significant place in our daily lives. The biggest challenge is mass consumer adoption, but as devices get faster and consumers become better educated on the uses of AR, it will not be long until yesterday's augmented reality becomes today's Internet.



Augmented Reality in the Field of Education, Business and Healthcare

Augmented reality is a cutting-edge technology that offers a highly engaging experience by leveraging the following technologies: the camera lens, the Global Positioning System (GPS) and Wi-Fi in your smartphone, tablet or laptop. It combines the view through your camera lens with computer-generated data and imagery to augment, or enhance, what the user perceives he is looking at. According to Azuma, R., AR is considered to be a variation of Virtual Reality (VR). The difference: VR technology provides a synthetic environment in which users immerse themselves. While immersed, the user cannot see the real world around him. AR, on the other hand, allows the user to see the real world, with virtual objects superimposed upon it. Hence, AR supplements reality rather than replacing it. AR enables innovative learning solutions for education. As an example, there is a learning solution specifically designed for mathematics and geometry education called Construct3D. Construct3D is based on the mobile collaborative AR system "Studierstube." According to a study made at the Institute of Software Technology and Interactive Systems, evidence supports that Construct3D is easy to learn, encourages experimentation with geometric constructions, and improves spatial skills. In evaluating the value-add of a technological innovation, several factors need to be considered. In the case of Construct3D, evaluation can be done using parameters that ensure better usability and user experience. Prisca Bonnet, in "Human Interaction with Augmented Reality," highlights the following

factors that can be used for evaluating a learning experience using AR:

a. Affordance: the application should have an inherent connection between a user interface and its functions. This can be achieved by providing a model describing subject-object relationships.
b. Reducing cognitive overhead: allow the user to focus on the actual task instead of being overwhelmed with information, which results in a poor user experience.
c. Low physical effort: make a task accomplishable with a minimum of interaction steps. Applications whose interaction is too complex will not be successful.
d. Learnability: provide easy learnability for the user. As AR provides novel interaction techniques, the usage of those techniques has to be easy to learn.
e. Responsiveness: guarantee good performance of the application, as users only tolerate a certain amount of system lag.
f. Error tolerance: deliver stable applications. As AR systems are mostly in early development stages, many bugs exist; applications should be able to continue working even when experiencing an error.
g. Rapid prototyping: implement user interaction designs quickly.
h. Usability evaluation: evaluate the usability.

In recruitment, augmented and virtual reality can be used to give office tours to potential candidates, as well as to present a day in the life of an employee at the company people are applying to. This has various benefits, such as:
• Allowing candidates to make an informed decision about the job
• Increasing the number of employees staying at the company
• Reducing the number of employee resignations


There are also advantages to using augmented reality in training and development. A virtual environment can be created, and hazards can be augmented into it for health and safety training for existing employees. Companies can also provide customer service training by showing correct body language, tone of voice, and so on through augmented reality. In addition, augmented reality can aid workplace flexibility: certain AR apps give details of employees' time and attendance via technologies such as clock software and biometric fingerprints. Applications of AR addressed in the article are as follows:

• Employee training and on-boarding: engaging and interactive training sessions can be created via AR technology. An overlay of graphics or on-screen instructions can be displayed from a headset or smart glasses to train employees. Remote training can also be given.
• Virtual tours / showrooms: businesses can utilize AR to show consumers their products virtually, enabling an in-person experience.
• Project prototyping: project owners can virtually present their product to investors and demonstrate its features.
• Navigation and routing: heads-up displays allow drivers who are transporting products for their business to navigate to the desired location in the fastest time possible.

The use of VR/AR also applies in healthcare: as a tool to aid doctors in medical procedures and day-to-day tasks, for physical therapy, to treat phobias like fear of heights, and to increase access to doctors through virtual visits.

Consider the case of Google Glass. When Google Glass first reached the market, Google offered select hospitals Glass devices to test the product. Surgeons used them for a range of functions, such as projecting CT scans and MRIs onto the field of vision while operating, scanning bar codes to gain basic medical information about the patient, and alerting the doctor to laboratory results. In therapy, the technology can treat patients with anxiety disorders: virtual worlds can serve as an artificial stimulus to habituate the patient to the environments that cause anxiety. Further, doctors can also use Google Glass during patient visits, where AR can enhance the experience.

From 3D operating room simulations to mental health treatments, VR and AR are expected to have vast applications. These advancements are predicted to generate $2.54 billion globally by 2020, according to one estimate. For example, in April 2016 the Royal London Hospital broadcast a colon cancer surgery, streamed live to VR headsets and smartphones using a 360-degree camera. The idea is a simple yet powerful way to transfer knowledge and skills. Similarly, virtual organ models could allow surgeons to better prepare for delicate surgeries.

In a report by DHL, AR has shown the most promise for logistics in warehousing operations. These operations are estimated to account for about 20% of all logistics costs, and the task of picking accounts for 55% to 65% of the total cost of warehousing operations. This indicates that AR has the potential to significantly reduce cost by improving the picking process. It can also help with the training of new and temporary warehouse staff, and with warehouse planning.
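Combining the two DHL figures above gives a rough sense of the stakes: if warehousing is about 20% of logistics cost and picking is 55–65% of warehousing cost, then picking alone is roughly 11–13% of total logistics cost. A back-of-the-envelope calculation:

```python
# Share of total logistics cost attributable to picking,
# using the DHL report figures cited above.
warehousing_share = 0.20            # warehousing ≈ 20% of logistics cost
picking_shares = (0.55, 0.65)       # picking = 55-65% of warehousing cost

picking_of_total = [round(warehousing_share * s, 2) for s in picking_shares]
print(picking_of_total)  # → [0.11, 0.13]
```

So even a modest percentage improvement in picking efficiency from AR-guided "vision picking" would flow through to a visible share of overall logistics spend.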


Technology Briefing

Authoring augmented reality experiences involves selecting and/or preparing digital assets, specifying the physical-world targets with which they will be associated, and specifying the interactions between the digital assets and physical targets. Once the enterprise customer has a clear project justification and purpose, the information assets are available (or created for the project) and the interactions are planned, the AR project can begin.

AR Application Development Environment

AR development environments vary widely depending on the audience, ranging from software engineers to people with no programming background.

Approach 1 – DEVELOPING. A company may invest in developing a proprietary development environment with programmers who are experienced in designing workflows. This approach is frequently based directly or loosely on AR development methodologies available in the scientific literature (e.g., IEEE ISMAR proceedings). Some companies choose to create their own application development environment because they:
• Plan to license it to third parties
• Have unique requirements not met by existing tools or environments
• Have determined that building an internal tool is less expensive than licensing one from a third party over the lifetime of a project

Approach 2 – BUYING. Commercial software libraries are also available for experienced software engineers who need to create AR projects, for example in the form of AR Software Development Kits (SDKs) or libraries. The enterprise pays a license fee to the software publisher and usually receives maintenance and support services to ensure that the software stays up to date. Vuforia is an example of an AR SDK licensed by PTC; it is popular with software engineers planning to:
• Create AR experiences triggered by 2D images and 3D objects
• Write or re-use a custom user interface
• Use a third-party application interface or "application engine" to develop user interfaces and interactivity (e.g., Unity)
• Use one or more third-party products to deliver a complete solution

Other commercial software libraries used in enterprise AR projects include Inglobe Technologies' ARmedia, which offers the following features:
• Advanced tracking libraries (face, SLAM, 2D and 3D)
• Rendering engines
• Support for geolocation-based AR applications (using GPS and compass observations)
• Deployment on iOS, Android and Windows
• Offline and online applications

Other companies offering AR SDKs include Layar (acquired by Blippar), Wikitude, String, 13th Lab (acquired by Oculus), Seac02, ARPA Solutions, Total Immersion, Hewlett Packard, Diotasoft, Infinity AR, MobLabs, ARToolworks (acquired by DAQRI) and Catchoom. A page on the SocialCompare site provides comparisons of features of AR SDKs from these companies and others (note that some projects listed there are no longer available). Research groups offering AR SDKs as part of their projects include VTT, Fraunhofer IGD and HIT Lab NZ.


In cases where SDKs do not include libraries for developing user interactions, developers can incorporate the Unity 3D game engine.

For AR developers with minimal or no programming skills, there are alternatives to using SDKs. Wikitude, Layar, Metaio and Catchoom offer visual programming or "drag and drop" interfaces for AR application development. These take less time and permit reuse of frequently used tools and sophisticated libraries.

Target Feature Extraction

A third group of technologies that may be useful for AR developers are feature extraction libraries. These are algorithms used as part of a tracking system and, therefore, are usually part of an AR SDK such as Vuforia, but they can also be developed or acquired separately. There are many third-party solutions. For example, SLAM (Simultaneous Localization and Mapping) libraries for AR are available from Flyby Media and Imperial College London. The OpenSLAM site provides more information on sources of SLAM technology.

Asset Databases

In addition to using an application development environment, developers track assets and interactions using a database such as Microsoft SQL Server, IBM DB2 or Oracle, or with a relational database management system such as SAP HANA. At this time, only SAP has added AR-specific features to its HANA product line.

Cost-Benefit Analysis
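The asset databases described above boil down to tracking which digital asset is tied to which physical-world target, and what interaction fires on recognition. A minimal, illustrative schema (table and column names are hypothetical, using Python's built-in SQLite rather than the commercial databases named in the text):

```python
import sqlite3

# Minimal sketch of an AR asset-tracking database: digital assets
# on one side, their physical-world targets and interactions on
# the other. Schema names here are illustrative only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE asset (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    kind  TEXT NOT NULL            -- e.g. 'model', 'video', 'overlay'
);
CREATE TABLE interaction (
    id        INTEGER PRIMARY KEY,
    asset_id  INTEGER REFERENCES asset(id),
    target    TEXT NOT NULL,       -- physical-world marker or object
    action    TEXT NOT NULL        -- what happens on recognition
);
""")
con.execute("INSERT INTO asset (name, kind) VALUES ('engine_model', 'model')")
con.execute(
    "INSERT INTO interaction (asset_id, target, action) "
    "VALUES (1, 'marker_042', 'show_overlay')"
)
rows = con.execute(
    "SELECT a.name, i.target, i.action FROM asset a "
    "JOIN interaction i ON i.asset_id = a.id"
).fetchall()
print(rows)  # → [('engine_model', 'marker_042', 'show_overlay')]
```

The same asset/target/interaction split scales to the enterprise systems named in the text; SAP's AR-specific HANA features add domain-aware handling on top of this kind of relational core.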


Ethical Issues of Augmented Reality

Augmented reality is one of the newest emerging technologies in today's world. Once fully developed, it offers promising improvements in education, healthcare, military use, and more. However, as with every innovation, there are also questions and drawbacks. In the case of augmented reality, the chief concerns are people's privacy and security.

Information is something almost everyone is very particular about, especially when it comes to giving out personal information. Several laws have been enacted to protect individuals' personal details; one is the Data Protection Act. Every organization also has its own way of ensuring that data will not be leaked and will only be used for the needs of the business. Millions in penalties and fines have been imposed on companies for violations of privacy and personal data protection.

Many places in the United States have banned the use of Google Glass in public areas like beaches, restaurants and bars. There have also been several reports of Google Glass users being threatened for refusing to remove their sets in public places. From this it is clear that most people seek and expect others to respect their private space. Personal space is very important to almost everyone. Imagine lining up in a grocery store when suddenly someone stands a few inches from you even though there are only the two of you at the counter. There is an awkward, uneasy feeling, and it is worse if the individual knows he or she is being recorded in real time for unknown reasons.

AR has many challenges going forward, and while the technology and the issues around privacy and security are the most frequently mentioned, there has been very little debate or discussion about the socio-cultural impact AR platforms will have on us as people. It is one thing to change the mindset of a specific group of people within a confined or controlled environment, but something completely different to change the mindset of millions of people in the "real world." The use of facial recognition technology, combined with geo-location and augmented data, will lead to a seamless integration of our online and offline lives: a person walking in the real world could appear in augmented reality as a person plus their personal information. Imagine being ostracized at a gathering by people wearing AR glasses just because they can see you in their glasses along with your religious affiliation. Another example would be at airports: if screening shows you are a citizen of a Middle Eastern country, you might be singled out for further inspection.

Augmented reality, like any mobile media technology, also presents real physical safety issues. We all know that mobile phones are among the biggest distractions in today's world. If the windshield of your car provides driving directions combined with other information about your surroundings, it could distract the driver and lead to a major accident involving the driver and, worse, pedestrians.

Nation Building

Augmented reality benefits the education and medical industries best. Imagine education where students can see actual movements and 3D views of their subject and topic. This would lead to eye-catching presentations, interactive lessons, portable and less expensive learning materials, higher retention of information, and fostered intellectual curiosity. In medicine, augmented reality can be used as well. In the Netherlands, someone developed an augmented reality app showing the locations of defibrillators and other life-saving devices. Google Glass can show people actual demonstrations of certain activities, for example CPR or the proper way of breastfeeding. Patients can also describe their symptoms using augmented reality, and AR can assist surgeons in operating rooms. A lot of positive things can be done with augmented reality. A picture will no longer be just a picture, but a 3D representation of its subject.


Augmented Reality: Forecast for the Future

Comparing AR


3D Bioprinting

Technical Environment

In Gartner's 2015 Hype Cycle for Emerging Technologies, it was noted that digital business is the first post-nexus stage on the roadmap, focusing on the convergence of people, business and things. The Internet of Things (IoT) and the concept of blurring the physical and virtual worlds are strong concepts in this stage. Physical assets become digitalized and become equal actors in the business value chain alongside already-digital entities, such as systems and apps. Enterprises seeking to go past the Nexus of Forces technologies to become a digital business should look into additional technologies.[1] 3D Bioprinting for Life Science R&D and 3D Bioprinting Systems for Organ Transplant were mentioned as examples of these additional technologies. As shown in the illustration below, 3D Bioprinting Systems for Organ Transplant is a fast-emerging technology expected to reach full implementation in 5 to 10 years.


3D Bioprinting History 

3D printing was introduced in 1983, when Charles Hull invented stereolithography. This special type of printing relied on a laser to solidify a liquid polymer material layer by layer. The instructions for the design came from an engineer, who would define the 3D shape of an object in computer-aided design (CAD) software and then send the file to the printer. Hull and his colleagues developed the file format, known as .stl, that carried information about the object's surface geometry, represented as a set of triangular faces.

By the early 1990s, 3D Systems had begun to introduce the next generation of materials: nanocomposites, blended plastics and powdered metals. These materials were more durable, which meant they could produce strong, sturdy objects that could function as finished products, not mere stepping-stones to finished products. Scientists went hunting for such materials, and by the late 1990s they had devised viable techniques and processes to make organ-building a reality.

In 1999, scientists at the Wake Forest Institute for Regenerative Medicine used a 3D printer to build a synthetic scaffold of a human bladder. They then coated the scaffold with cells taken from their patients and successfully grew working organs. This set the stage for true bioprinting. In 2002, scientists printed a miniature functional kidney capable of filtering blood and producing urine in an animal model. And in 2010, Organovo, a bioprinting company headquartered in San Diego, printed the first blood vessel.[2]
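The .stl format mentioned above really is this simple in its ASCII form: a solid is a list of triangular facets, each with a normal vector and three vertices. A minimal sketch that emits one facet (a single triangle; real models contain thousands):

```python
# Build an ASCII .stl fragment: surface geometry as triangular
# faces, each carrying a normal vector and three vertices.
def stl_triangle(name, normal, v1, v2, v3):
    fmt = lambda p: " ".join(f"{c:.1f}" for c in p)
    return (
        f"solid {name}\n"
        f"  facet normal {fmt(normal)}\n"
        "    outer loop\n"
        + "".join(f"      vertex {fmt(v)}\n" for v in (v1, v2, v3))
        + "    endloop\n"
        "  endfacet\n"
        f"endsolid {name}\n"
    )

# One upward-facing triangle in the z = 0 plane
print(stl_triangle("demo", (0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0)))
```

Slicing software walks exactly this list of triangles to reconstruct the surface; binary .stl encodes the same facet data more compactly.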

HOW 3D BIOPRINTING WORKS [3]

In a review article, Murphy and Atala describe how 3D bioprinting, a form of three-dimensional (3D) printing, is driving major innovations in many areas, such as engineering, manufacturing, art, education and medicine. Recent advances have enabled 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. Figure 1 shows the typical process of 3D bioprinting.[4]


3D bioprinting is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with nonbiological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. 3D bioprinting has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. Other applications include developing high-throughput 3D-bioprinted tissue models for research, drug discovery and toxicology. Imaging of the damaged tissue and its environment can be used to guide the design of bioprinted tissues. Biomimicry, tissue self-assembly and mini-tissue building blocks are design approaches used singly and in combination. The choice of materials and cell source is essential and specific to the tissue form and function. Common materials include synthetic or natural polymers and decellularized ECM. Cell sources may be allogeneic or autologous. These components have to integrate with bioprinting systems such as inkjet, microextrusion or laser-assisted printers. Some tissues may require a period of maturation in a bioreactor before transplantation.

Alternatively, the 3D tissue may be used for in vitro applications.

Figure 2 shows the components of inkjet, microextrusion and laser-assisted bioprinters.5, 6 (a) Thermal inkjet printers electrically heat the printhead to produce air-pressure pulses that force droplets from the nozzle, whereas acoustic printers use pulses formed by piezoelectric or ultrasound pressure. (b) Microextrusion printers use pneumatic or mechanical (piston or screw) dispensing systems to extrude continuous beads of material and/or cells. (c) Laser-assisted printers use lasers focused on an absorbing substrate to generate pressures that propel cell-containing materials onto a collector substrate.

Ideal material properties for bioprinting
The selection of appropriate materials for use in bioprinting, and their performance in a particular application, depends on several features. These are listed below.

Printability
Properties that facilitate handling and deposition by the bioprinter may include viscosity, gelation methods and rheological properties.


Biocompatibility
Materials should not induce undesirable local or systemic responses from the host and should contribute actively and controllably to the biological and functional components of the construct.

Degradation kinetics and byproducts
Degradation rates should be matched to the ability of the cells to produce their own ECM; degradation byproducts should be nontoxic; materials should demonstrate suitable swelling or contractile characteristics.

Structural and mechanical properties
Materials should be chosen based on the required mechanical properties of the construct, ranging from rigid thermoplastic polymer fibers for strength to soft hydrogels for cell compatibility.

Material biomimicry
Engineering of desired structural, functional and dynamic material properties should be based on knowledge of tissue-specific endogenous material compositions.

Top 10 3D Bioprinters7

1. ENVISIONTEC'S 3D BIOPLOTTER MANUFACTURER SERIES + DEVELOPER SERIES
Technology: syringe-based extrusion
Materials:

2. ORGANOVO'S NOVOGEN MMX
Technology: syringe-based extrusion
Materials: cellular hydrogel

3. REGENHU'S 3DDISCOVERY + BIOFACTORY
Technology: syringe-based extrusion
Materials: bio ink, osteo ink

4. 3D BIOPRINTING SOLUTIONS' FABION
Technology: multiple (photocuring, electromagnetic and extrusion)
Materials: hydrogel, organoids

5. BIOBOTS BIOBOT1
Technology: syringe-based extrusion, blue light technology
Materials: agarose, collagen, alginate,

6. CELLINK INKREDIBLE
Technology: syringe-based extrusion
Materials: CELLINK+ (improved CELLINK for chondrogenic differentiation), CELLINK A

7. OUROBOTICS REVOLUTION
Technology: syringe-based extrusion

8. ADVANCED SOLUTIONS' BIOASSEMBLYBOT
Technology: six-axis syringe-based extrusion

9. GESIM'S BIOSCAFFOLDER 2.1
Technology: syringe-based extrusion and piezoelectric nanoliter pipetting
Materials: polymers, high-viscosity paste materials, alginate, calcium phosphate, silicon, cells and protein

10. 3DYNAMIC SYSTEMS' ALPHA & OMEGA
Technology: syringe-based extrusion
Materials: bone tissue from PCL, PLA, PGA, PEG, fibrin, elastin, collagen, calcium phosphate and hydrogel mixtures

Trends in 3D Bioprinting8
According to 3D Bioprinting Market Worth $1.82 Billion By 2022, a report by Grand View Research, Inc., the global 3D bioprinting market is expected to reach USD 1.82 billion by 2022. The rising prevalence of chronic diseases such as Chronic Kidney Disease (CKD), which drives demand for kidney transplantation, is expected to boost market growth, as 3D bioprinting is a convenient and cost-effective substitute for conventional organ transplantation. Emerging medical applications of 3D bioprinting, such as toxicity testing, drug discovery, tissue engineering and consumer product testing, are expected to drive market growth further. Increasing R&D expenditure is anticipated to be a high-impact driver for the industry: 3D bioprinting advances tissue engineering and allows development using biomaterials with better biocompatibility. Moreover, the rising geriatric population is expected to support market growth, as this demographic is highly susceptible to age-related organ deformities.


SWOT Analysis

BUSINESS/INDUSTRY/EDUCATION APPLICATION

Bioprinting Tissues and Organs
Tissue or organ failure due to aging, diseases, accidents, and birth defects is a critical medical problem. Current treatment for organ failure relies mostly on organ transplants from living or deceased donors. However, there is a chronic shortage of human organs available for transplant. In 2009, 154,324 patients in the U.S. were waiting for an organ. Only 27,996 of them (18%) received an organ transplant, and 8,863 (25 per day) died while on the waiting list. As of early 2014, approximately 120,000 people in the U.S. were awaiting an organ transplant. Organ transplant surgery and follow-up is also expensive, costing more than $300 billion in 2012.

An additional problem is that organ transplantation involves the often difficult task of finding a donor who is a tissue match. This problem could likely be eliminated by using cells taken from the organ transplant patient's own body to build a replacement organ. This would minimize the risk of

tissue rejection, as well as the need to take lifelong immunosuppressants. Therapies based on tissue engineering and regenerative medicine are being pursued as a potential solution for the organ donor shortage. The traditional tissue engineering strategy is to isolate stem cells from small tissue samples, mix them with growth factors, multiply them in the laboratory, and seed the cells onto scaffolds that direct cell proliferation and differentiation into functioning tissues. Although still in its infancy, 3D bioprinting offers additional important advantages beyond this traditional regenerative method (which essentially provides scaffold support alone), such as: highly precise cell placement and high digital control of speed, resolution, cell concentration, drop volume, and diameter of printed cells. Organ printing takes advantage of 3D printing technology to produce cells, biomaterials, and cell-laden biomaterials individually or in tandem, layer by layer, directly creating 3D tissue-like structures. Various materials are available to build the scaffolds, depending on the


desired strength, porosity, and type of tissue, with hydrogels usually considered to be most suitable for producing soft tissues. Although 3D bioprinting systems can be laser-based, inkjet-based, or extrusion-based, inkjet-based bioprinting is most common. This method deposits "bioink," droplets of living cells or biomaterials, onto a substrate according to digital instructions to reproduce human tissues or organs. Multiple printheads can be used to deposit different cell types (organ-specific, blood vessel, muscle cells), a necessary feature for fabricating whole heterocellular tissues and organs.

A process for bioprinting organs has emerged:
1) create a blueprint of an organ with its vascular architecture;
2) generate a bioprinting process plan;
3) isolate stem cells;
4) differentiate the stem cells into organ-specific cells;
5) prepare bioink reservoirs with organ-specific cells, blood vessel cells, and support medium and load them into the printer;
6) bioprint; and
7) place the bioprinted organ in a bioreactor prior to transplantation.

Laser printers have also been employed in the cell printing process, in which laser energy is used to excite the cells in a particular pattern, providing spatial control of the cellular environment.

Although tissue and organ bioprinting is still in its infancy, many studies have provided proof of concept. Researchers have used 3D printers to create a knee meniscus, heart valve, spinal disk, other types of cartilage and bone, and an artificial ear. Cui and colleagues applied inkjet 3D printing technology to repair human articular cartilage. Wang et al. used 3D bioprinting technology to deposit different cells within various biocompatible hydrogels to produce an artificial liver.
Doctors at the University of Michigan published a case study in the New England Journal of Medicine reporting that use of a 3D printer and CT images of a patient’s airway enabled them to fabricate a precisely modeled, bioresorbable tracheal splint that was surgically implanted in a baby with tracheobronchomalacia. The baby recovered, and full resorption of the splint is expected to occur within three years. A number of biotech companies have focused on creating tissues and organs for medical research. It may be possible to rapidly screen new potential therapeutic drugs on patient tissue, greatly cutting

research costs and time. Scientists at Organovo are developing strips of printed liver tissue for this purpose; soon, they expect the material will be advanced enough to use in screening new drug treatments. Other researchers are working on techniques to grow complete human organs that can be used for screening purposes during drug discovery. An organ created from a patient's own stem cells could also be used to screen treatments to determine if a drug will be effective for that individual.

Drug Discovery

Drug discovery is a highly expensive process which in most cases ends in failure to gain regulatory clearance. The reason for this high failure rate is the lack of sufficiently accurate pre-clinical (prior to human volunteer) testing methodologies, which to date have been limited to 2-dimensional human cell assays together with animal testing. Different species can react to the same drug in very different ways, and 2-dimensional cell cultures behave very differently in terms of coalescence and proliferation compared to cells which inhabit a 3-dimensional environment. In short, humans are not 2-dimensional 70 kg mice.

For some time, therefore, medical researchers have sought means to mimic the 3-dimensional human tissue environment in the laboratory in an effort to make the drug discovery process more reliable, thereby (a) reducing complications associated with human clinical trials of novel drugs, (b) lowering the costs resulting from late-stage failures, (c) ensuring that dead ends are abandoned quickly so that attention can be focused on more promising avenues, and (d) shortening the drug discovery timescale so that potentially life-saving drugs reach the market as soon as possible.

Development of 3D assays has remained a challenge, however, as the degree of precision required to emulate cell-to-cell communication in vivo (in the body) has proved elusive. Computer-controlled 3D bioprinting, combined with curable bioinks, has now enabled the fabrication of 3D tissue, which moreover can survive


for significantly longer periods of time compared to their 2D counterparts, enabling longer term impact of a novel drug on human tissue cultures to be analysed.

Cosmetic/consumer product testing

In 2013 the European Union (EU) enforced new legislation banning the use of animal testing for all personal consumer products. No such product, or any ingredient thereof, may be tested on animals, and no product or ingredient which has been tested on animals outside of the EU may be retailed within the EU. This has proved a major driver for companies in this sector to seek new means of testing the safety of their new products, not least because the EU represents the largest single market for cosmetics and other such products. For example, in October 2013 the world's largest cosmetics company, L'Oreal, entered into an agreement with 3D bioprinting company Organovo to explore the use of 3D bioprinting for cosmetic safety testing, specifically skin care products.

Tissue grafts/implants

The longer-term holy grail of 3D bioprinting is the ability to print viable human tissue for grafting or implantation into the human body. Research is already underway into the 3D bioprinting of non-vascular tissue (thin tissue which does not require a network of nutrient-delivering capillaries) such as skin and cartilage. Work in this area is expected to commence clinical trials in the immediate future and will reduce the need for mechanical implants and human donors.

3D Printing for Bone Replacement11

One company working in the orthopedic space is Tissue Regeneration Systems. The company has a technology platform for skeletal reconstruction and bone regeneration that was licensed from the Universities of Michigan and Wisconsin. Jim Fitzsimmons, president and CEO of Tissue Regeneration Systems, says there are two key parts of the technology platform. "That first part is an ability to use 3D printing manufacturing to construct complex implant scaffolds from a bioresorbable material called PCL," he says. "These implants are intended to replace missing bone. They are porous because the intent is for bone to grow in and through the scaffold. Over time, the scaffold goes away because it is bioresorbable."

The technology allows for patient-specific implants. The company can take an imaging study that shows the exact shape and geometry of the missing bone. "We can turn this image into an engineering drawing that drives the creation of a custom implant that is a perfect fit for what bone is missing," Mr. Fitzsimmons says. "The implant is porous, which allows the bone a place to grow. We have been able to engineer these open spaces so that the implant can bear load, which is a novel attribute for these implants."

The second part of the technology is a process to grow a bone mineral coating on the surfaces of the implant and internal pores. It transforms a polymer implant into an implant whose surface areas are bone-like, which becomes a much better substrate for bone growth and attachment to the implant. “A patient’s bone will start forming and growing into and throughout the implant scaffold,” Mr. Fitzsimmons says. “Adherence, growth, and attachment are more robust when the bone thinks it is attaching to bone.”

Regenerative Scaffolds and Bones

A further research team with the long-term goal of producing human organs-on-demand has created the EnvisionTEC Bioplotter. Like Organovo's NovoGen MMX, this outputs bio-ink 'tissue spheroids' and supportive scaffold materials including fibrin and collagen hydrogels. But in addition, the EnvisionTEC can also print a wider range of biomaterials. These include biodegradable polymers and ceramics that may be used to support and help form artificial organs, and which may even be used as bioprinting substitutes for bone.

Talking of bone, a team led by Jeremy Mao at the Tissue Engineering and Regenerative Medicine Lab at Columbia University is working on the application of bioprinting in dental and bone repairs. Already, a bioprinted, mesh-like 3D scaffold in the shape of an incisor has been implanted into the jaw bone of a rat.


This featured tiny, interconnecting microchannels that contained 'stem cell-recruiting substances'. In just nine weeks after implantation, these triggered the growth of fresh periodontal ligaments and newly formed alveolar bone. In time, this research may enable people to be fitted with living, bioprinted teeth, or else scaffolds that will cause the body to grow new teeth all by itself. You can read more about this development in this article from The Engineer.

In another experiment, Mao's team implanted bioprinted scaffolds in the place of the hip bones of several rabbits. Again these were infused with growth factors. As reported in The Lancet, over a four-month period the rabbits all grew new and fully functional joints around the mesh. Some even began to walk and otherwise place weight on their new joints only a few weeks after surgery. Sometime next decade, human patients may therefore be fitted with bioprinted scaffolds that will trigger the growth of replacement hip and other bones. In a similar development, a team from Washington State University has also recently reported on four years of work using 3D printers to create a bone-like material that may in the future be used to repair injuries to human bones.

Already a team of bioprinting researchers led by Anthony Atala at the Wake Forest School of Medicine has developed a skin printer. In initial experiments they have taken 3D scans of test injuries inflicted on some mice and have used the data to control a bioprint head that has sprayed skin cells, a coagulant and collagen onto the wounds. The results are very promising, with the wounds healing in just two or three weeks compared to about five or six weeks in a control group. Funding for the skin-printing project is coming in part from the US military, which is keen to develop in situ bioprinting to help heal wounds on the battlefield. At present the work is still in a pre-clinical phase, with Atala progressing his research using pigs. However, trials with human burn victims could be as little as five years away.

The potential to use bioprinters to repair our bodies in situ is pretty mind-blowing. In perhaps no more than a few decades it may be possible for robotic surgical arms tipped with bioprint heads to enter the body, repair damage at the cellular level, and then also repair their point of entry on their way out. Patients would still need to rest and recuperate for a few days as bioprinted materials fully fused into mature living tissue. However, most patients could potentially recover from very major surgery in less than a week.

Cost-Benefit Analysis

Total Cost of Ownership

3D bioprinter cost: RegenHU 3DDiscovery and BioFactory, $200,000


Direct and Indirect/Strategic Benefits

DIRECT
The greatest advantage that 3D printers provide in medical applications is the freedom to produce custom-made medical products and equipment. For example, the use of 3D printing to customize prosthetics and implants can provide great value for both patients and physicians. In addition, 3D printing can produce made-to-order jigs and fixtures for use in operating rooms. Custom-made implants, fixtures, and surgical tools can have a positive impact in terms of the time required for surgery, patient recovery time, and the success of the surgery or implant. It is also anticipated that 3D printing technologies will eventually allow drug dosage forms, release profiles, and dispensing to be customized for each patient.

Another important benefit offered by 3D printing is the ability to produce items cheaply. Traditional manufacturing methods remain less expensive for large-scale production; however, the cost of 3D printing is becoming more and more competitive for small production runs. This is especially true for small standard implants or prosthetics, such as those used for spinal, dental, or craniofacial disorders. The cost to custom-print a 3D object is minimal, with the first item being as inexpensive as the last. This is especially advantageous for companies that have low production volumes or that produce parts or products that are highly complex or require frequent modifications.

"Fast" in 3D printing means that a product can be made within several hours. That makes 3D printing technology much faster than traditional methods of making items such as prosthetics and implants, which require milling, forging, and a long delivery time. In addition to speed, other qualities, such as the resolution, accuracy, reliability, and repeatability of 3D printing technologies, are also improving.

Another beneficial feature offered by 3D printing is the democratization of the design and manufacturing of goods. An increasing array of materials is becoming available for use in 3D printing, and they are decreasing in cost. This allows more people, including those in medical fields, to use little more than a 3D printer and their imaginations to design and produce novel products for personal or commercial use.

INDIRECT/STRATEGIC
Tissues created through bioprinting can be used in testing cosmetics and drugs, making such testing safer for pharmaceutical and cosmetic companies. Tissues can also be exposed to different viruses and bacteria, helping to advance pathology.

The medical applications segment is expected to gain a substantial share of over 32.0% by 2022, which can be attributed to emerging applications of 3D bioprinting such as toxicity screening, organ transplants and medical pills. Asia Pacific is expected to witness lucrative growth over the forecast period: favorable government support to improve healthcare delivery, together with economic development in emerging countries such as India and China, is expected to increase the demand for 3D bioprinting. The dental application segment is also expected to witness lucrative growth over the forecast period, as rising demand for cosmetic dentistry along with growing awareness about oral hygiene is expected to foster demand for dental applications.


Financial Analysis Return on Investment

Cost of 3D bioprinter: $200,000
Revenue from 3D bioprinting a kidney: $221,000 (based on research conducted by the University of Singapore)
Estimated operational and admin expense: $100,000
Forecasted demand for kidneys: 1 per month
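Taken at face value, these sketch figures imply a very short payback period. A back-of-the-envelope check in Python (assuming the operational and admin expense is an annual figure and that demand holds at one kidney per month; neither assumption is stated above):

```python
# Back-of-the-envelope ROI on the bioprinter figures quoted above.
# Assumptions (not stated in the source): the $100,000 operational/
# admin expense is annual, and demand holds at one kidney per month.

printer_cost = 200_000        # RegenHU 3DDiscovery and BioFactory
revenue_per_kidney = 221_000  # per the cited University of Singapore figure
annual_opex = 100_000
kidneys_per_year = 12         # one per month

annual_revenue = revenue_per_kidney * kidneys_per_year      # 2,652,000
first_year_net = annual_revenue - annual_opex - printer_cost
simple_roi = first_year_net / printer_cost * 100

print(f"First-year net: ${first_year_net:,}")     # $2,352,000
print(f"Simple first-year ROI: {simple_roi:.0f}%")  # 1176%
```

On these assumptions the printer would pay for itself within its first month of operation; the point of the sketch is only that the hardware cost is small relative to the quoted per-organ revenue.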

Source: www.organovo.com

Return on Assets
Net income/loss = $(38,575)
Average total assets = ($53,489 + $67,576) / 2 = $60,532.5
Return on assets = (64%)

Return on Equity
Net income/loss = $(38,575)
Average common stock = ($82 + $92) / 2 = $87
Return on equity = (44,339%)
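The two ratios can be recomputed directly from the stated inputs (a minimal Python check; the parenthesized figures in the source denote negative values, reflecting the reported net loss):

```python
# Recomputing Organovo's return ratios from the figures quoted above.
# Amounts are as given in the source; the net loss is negative.

net_income = -38_575                       # net income/loss
avg_total_assets = (53_489 + 67_576) / 2   # = 60,532.5
avg_common_stock = (82 + 92) / 2           # = 87

roa = net_income / avg_total_assets * 100  # return on assets, %
roe = net_income / avg_common_stock * 100  # return on equity, %

print(f"ROA: {roa:.0f}%")  # ROA: -64%
print(f"ROE: {roe:.0f}%")  # ROE: -44339%
```

The very small common-stock denominator ($87) is what produces the extreme ROE magnitude relative to ROA.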


Ethical Implications

3D bioprinting raises some ethical dilemmas. Many strongly support its use because it could improve quality of life, repair injuries by printing repairs and replacements from an individual's own tissues, cure diseases, and reduce or eliminate the use of animal testing in medical laboratories. Yet it raises several ethical questions that need to be considered as the technology develops. An article from http://www.abc.net.au/science/articles/2015/02/11/4161675.htm discusses three such issues: justice in access to health care, testing for safety and efficacy, and whether these technologies should be used to enhance the capacity of individuals beyond what is "normal" for humans. As cited in the article, the implications are as follows:

Justice and access
One major concern about the development of personalised medicine is the cost of treatments. Until recently it has been thought that advances in personalised medicine go hand-in-hand with increasing disparities in health between rich and poor. Should these treatments only be available to those who can pay the additional cost? If so, then those patients who lack financial resources may not receive effective treatments that others can access for a range of serious conditions.

Personalised medicine is most closely associated with research in genomics and stem cell therapies. The advantages of personalising medicine are most obvious in cases where a condition affects patients in very different ways and standardised treatments offer imperfect benefits. For example, conditions affecting the growing bones of children are among those where personalising treatments, if these can be adapted to the rapidly changing bodies of children, can make a very big difference to the child's comfort and capacity to participate in ordinary childhood activities and play.
Until recently, the cost and time required to provide a series of customised prostheses of different sizes for a child who has lost a leg to cancer, for example, has been prohibitive for many patients. 3D printing will bring down the time and cost of customising and

producing prosthetic legs. In cases like that of Ben Chandler, printers can also be used for implants, which might avoid the need to amputate the original limb, even where significant bone loss has occurred. The capacity to use 3D printing technology to substantially reduce the cost of prosthetics, or orthopaedic surgery to restore lost bone structures, means that this area of personalised medicine can avoid the criticism that personalised medicine inevitably increases the cost of health care and puts effective personalised treatments out of the reach of many patients.

Will 3D printing treatments be safe?
A second ethical concern about any new treatment, including the use of 3D printing, is how we can test that the treatment is safe and effective before it is offered as a clinical treatment. In the case of 3D printing to replace bone, the materials used (for example, titanium) are those already used for orthopedic surgery, and have been tested for safety over a long period and with many patients, so it is unlikely that there are new risks from the materials.

In the future, 3D printing may be used in combination with stem-cell-derived cell lines. This could lead to the development of printed functioning organs that can replace a patient's damaged organ, but without the risk of rejection associated with donor organs, because it uses that patient's own cells. How can we know in advance that these treatments are safe? Unlike the case of developing a new drug, a stem cell therapy can't be tested on a sizable number of healthy people prior to being tested on


patients and then, finally, being made available as a standard treatment. The point of using a patient's own stem cells is to tailor the treatment quite specifically to that patient, and not to develop a treatment that can be tested on anybody else.

Researchers combining 3D printing with personalized stem cell therapies beyond the experimental stage will need to develop new models for testing their treatments for safety and effectiveness. Regulatory bodies that give approval for new treatments, such as Australia's Therapeutic Goods Administration (TGA), will also need to establish new standards of testing for regulatory approval before these treatments can become readily available. This means that even if researchers were ready to print a functioning prosthetic organ, it will be quite some time before patients with kidney disease should expect to be offered a 3D-printed prosthetic kidney that uses their stem cells as a routine treatment.

Human enhancement
The third issue is whether or not we should use 3D printing for human enhancement. If the technology can be used to develop replacement organs and bones, couldn't it also be used to develop human capacities beyond what is normal for human beings? For example, should we consider replacing our existing bones with artificial ones that are stronger, more flexible and less likely to break; or improving muscle tissue so that it is more resilient and less likely to become fatigued; or implanting new lungs that oxygenate blood more efficiently, even in a more polluted environment?

The debate about human enhancement is familiar from the context of elite sport, where athletes have sought to use medical technology to extend their speed, strength or endurance beyond what is 'natural', or what they are able to achieve without drugs or

supplements. In that context, the use of performance-enhancing drugs is considered to cheat other athletes, unbalancing the level playing field. In the case of 3D bioprinting, enhancement of human capacities could be associated with military use of the technology and the idea that it would be an advantage if our soldiers were less susceptible to being wounded, fatigued or harmed in battle. While it would be preferable for military personnel to be less vulnerable to physical harm, the history of military technology suggests that 3D printing could lead to a new kind of arms race: increasing the defenses that soldiers have in the face of battle would lead to increasing the destructive power of weapons to overcome those defenses, and in so doing, increase the harm to which civilians are exposed. In this way, 3D printing may open a new gap between the vulnerabilities of "enhanced" combatants and civilians, at a time when the traditional moral rules concerning warfare and legitimate targets are muddied by terrorism and insurgency.

Stem Cell Research
Stem cell research also raises concerns. Stem cells are undifferentiated biological cells that can differentiate into specialized cells and can divide (through mitosis) to produce more stem cells. There are three types of stem cells: (1) embryonic stem cells, (2) adult stem cells and (3) umbilical cord blood cells. The use of human embryonic stem cells has raised controversy. When scientists learned how to remove stem cells from human embryos in 1998, they were excited by the huge potential of these cells for curing human diseases, but moral controversy arose over the destruction of human embryos. However, in 2006 scientists discovered how to stimulate a patient's own cells to behave like embryonic stem cells, which reduced the need for embryonic stem cells in research. See the image below from http://learn.genetics.utah.edu/content/stemcells/scissues/ regarding the replacement of embryonic stem cells with iPS cells.


The ethical questions raised with regard to human embryonic stem cells are the following:
• Does life begin at fertilization, in the womb, or at birth?
• Is a human embryo equivalent to a human child?
• Does a human embryo have any rights?
• Might the destruction of a single embryo be justified if it provides a cure for a countless number of patients?

Both human embryonic stem (hES) cells and induced pluripotent stem (iPS) cells are pluripotent: they can become any type of cell in the body. While hES cells are isolated from an embryo, iPS cells can be made from adult cells.

Nation-Building Implications16

3D printing technology was first applied in medicine in the early 2000s to make dental implants and custom prosthetics, and since then its medical applications have evolved considerably. 3D printing has been used to produce ears, bones, exoskeletons, windpipes, jaws, cell cultures, stem cultures, organs and tissues, as well as novel dosage forms and drug delivery devices.

A study found that demand for organ transplants in the US is trending upward: of the roughly 115,000 people who currently need organ transplants, about 10 die every day while waiting (see graph below). 3D bioprinting technology will definitely help to bridge the gap, because there is no need to wait for organ donors. The article Medical Applications for 3D Printing: Current and Projected Uses9 mentions that organ transplant surgery and follow-up is expensive, costing more than $300 billion in 2012. An additional problem is that organ transplantation involves the often difficult task of finding a donor who is a tissue match. This problem could likely be eliminated by using cells taken from the organ transplant patient's own body to build a replacement organ, which would minimize the risk of tissue rejection as well as the need to take lifelong immunosuppressants.

In Toward the Printed World: Additive Manufacturing and Implications for National Security, Connor M. McNulty, Neyla Arnas, and Thomas A. Campbell note that contributions to health care have a direct bearing on U.S. national security. Technological developments now allow for the efficient production of human tissue, which, under certain conditions, can be applied directly to a wound or infection. This could dramatically change the way battlefield injuries are treated and reduce the number of fatalities during combat operations.

Source: www.ivhn.org


References:
1. STAMFORD, Conn., Gartner's 2015 Hype Cycle for Emerging Technologies Identifies the Computing Innovations That Organizations Should Monitor [http://www.gartner.com/newsroom/id/3114217], August 18, 2015
2. William Harris, How 3-D Bioprinting Works [http://health.howstuffworks.com/medicine/modern-technology/3-d-bioprinting1.htm]
3. Sean V. Murphy and Anthony Atala, 3D Bioprinting of Tissues and Organs, August 2014
4. Self-assembly image is reprinted from Mironov, V. et al., Organ printing: tissue spheroids as building blocks, Biomaterials 30, 2164–2174 (2009), with permission from Elsevier; mini-tissue image is reprinted from Norotte, C. et al., Scaffold-free vascular tissue engineering using bioprinting, Biomaterials 30, 5910–5917 (2009), with permission from Elsevier; the ECM image is adapted from ref. 132, with permission from Wiley; differentiated cells image is reprinted from Kajstura, J. et al., Evidence for human lung stem cells, N. Engl. J. Med. 364, 1795–1806 (2011), with permission from Massachusetts Medical Society; laser-assisted image is reprinted from Guillemot, F. et al., High-throughput laser printing of cells and biomaterials for tissue engineering, Acta Biomater. 6, 2494–2500 (2010), with permission from Elsevier.
5. Nature Biotechnology 32, 773–785 (2014), doi:10.1038/nbt.2958, published online 05 August 2014
6. Malda, J. et al., 25th anniversary article: engineering hydrogels for biofabrication, Adv. Mater. 25, 5011–5028 (2013)
7. Davide Sher, Top 10 Bioprinters [https://3dprintingindustry.com/news/top-10-bioprinters-55699/], August 26, 2015
8. Grand View Research, Inc., 3D Bioprinting Market Worth $1.82 Billion By 2022: Grand View Research, Inc. [http://www.prnewswire.com/news-releases/3d-bioprinting-market-worth-182-billion-by-2022-grand-view-research-inc-554555461.html], Nov 26, 2015
9. C. Lee Ventola, Medical Applications for 3D Printing: Current and Projected Uses [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4189697/], Oct 2014
10. PR Newswire, 3D Bioprinting 2014-2024: Applications, Markets, Players [http://www.prnewswire.com/news-releases/3d-bioprinting-2014-2024-applications-markets-players-300196725.html], Dec 22, 2015
11. Bioprinting [http://www.explainingthefuture.com/bioprinting.html]
12. Waldo C., Bioprinting [http://ethicsof3dprinting.weebly.com/bioprinting.html], September 25, 2012
13. 3D bioprinting: A new insight into the therapeutic strategy of neural tissue regeneration [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4879895/], December 28, 2015
14. Jeffrey Funk, 3D Bio-Printing: Becoming Economically Feasible [http://www.slideshare.net/Funk98/3dbioprinting-becoming-economically-feasible], November 21, 2013
15. Learn Genetics: The Stem Cell Debate: Is It Over? [http://learn.genetics.utah.edu/content/stemcells/scissues/]
16. Connor M. McNulty, Neyla Arnas, and Thomas A. Campbell, Toward the Printed World: Additive Manufacturing and Implications for National Security, INSS Defense Horizons [http://www.dtic.mil/dtic/tr/fulltext/u2/a577162.pdf], September 2012


Cloud Computing

TECHNOLOGY DESCRIPTION

Cloud computing refers to both the applications delivered as services over the Internet and the hardware and software in the datacenters that provide those services; the "cloud" is that datacenter hardware and software. The US National Institute of Standards and Technology defines cloud computing as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Definition
• Access to applications and data from any network device
• Computational resources accessible via a computer network, rather than from a local computer
• Any computer connected to the Internet has access to files and applications
• Users access personal files, play games, and open applications on a remote server
• “Cloud” is a metaphor representing the abstraction of the Internet


S.W.O.T.

Opportunities
Simple and convenient file storage draws in users. – The spike of interest in 2013 was driven mainly by cloud computing's advantages, chiefly its high computing power and low service costs. Agility and flexibility. – Cloud computing may be considered, and adopted, by anyone who needs a fast, simple, and efficient way to store files that can be easily accessed. For many users it is the best choice, drawing still more of them to cloud computing.

Strengths
Reduces the need for on-site hardware and physical storage. – With cloud computing, users can store their files on remote servers and access them regardless of location. This reduces the need to purchase physical storage, since cloud computing performs the same task at a low service cost, and it is more convenient than physical storage in that it can be reached from anywhere with access to networking infrastructure. Cost-effectiveness. – Users can buy only the cloud services they need, with the option to upgrade later, matching and even outdoing the value and quality of physical equipment. This reduces the chance of making unnecessary investments in physical equipment that may break down or be misplaced. Fewer maintenance concerns. – Cloud computing does not require downloading programs that need virus scans, nor does it require maintenance by the user, and it can be accessed from anywhere with a proper Internet connection. This compares favorably with the repeated virus scans and other maintenance needed to keep physical equipment in working order.

Weaknesses
Unavailability of service when the Internet goes down. – Cloud computing is, by nature, an Internet-based form of computing, so it is not entirely reliable: for a user who depends on the Internet for their files, the service becomes unavailable whenever the connection goes down. Potential difficulty when transferring files. – Transferring files from one device to another, although usually convenient, can be difficult, whether because of a poor Internet connection, the type of files being sent, or other factors.

Threats
Security risks. – Cloud computing runs on third-party servers, meaning that files stored on them may be accessed by other people if users do not handle them properly. Users may need to exercise precautions when storing files, such as not uploading personal or classified information. Demands. – Users' needs and demands vary and increase over time, so the use of cloud computing may rise and fall. Providers usually need to keep improving their servers to make them better maintained and more user-friendly, which carries risk in terms of their user base.


TRENDS IN TECHNOLOGY

APPLICATIONS

Cloud computing has been identified as one of the disruptors of the digital age, and it tends to find ever more applications in the growing field of data analytics. Private providers like Amazon increasingly want more organizations to convert to cloud computing, and every provider is looking to enter and manage an organization's data platform. Most companies, about 80%, have a cloud strategy. Cloud analytics will allow users to access and enjoy powerful data analytics from mobile phones. Security is no longer a top cloud challenge: commercial and private providers have already found ways to curb threats to data security.

The commercial, private, and government sectors have all taken advantage of cloud computing. Service providers have expanded their offerings to include the entire stack: hardware and software infrastructure, middleware platforms, applications, and turnkey solutions. In the private sector, cloud computing has helped improve resource utilization and service response times, bringing benefits in efficiency, agility, and innovation. For governments, it has the potential to increase operational efficiency and allow faster responses to citizens.

• Data Backup Service • Automated Marketing, Sales Schedule, Leads • Save Files • Business Intelligence • Shared HR Management System • Accounting Tools • Online Notes • Surveys

Introduction (Business Segment): The Rise of a New Phenomenon

Some authors refer to cloud computing as a new paradigm and emerging technology that flexibly offers IT resources and services over the Internet (Fenn et al. 2008). In this regard, cloud computing has the potential to revolutionize the way computing resources and applications are deployed, breaking up traditional value chains and making room for new business models. Computing itself has evolved from large tabulating machines and mainframe architectures that centrally offered calculating resources, via distributed and decentralized client-server architectures and personal computers, to ubiquitous, small personal (handheld) devices. Below are some “client speak” statements that have pushed decision makers to broaden their points of view and gear their action plans toward venturing into cloud computing (Source: Cloud Computing, The Open Group):

“We are unable to align capabilities with the needs of the business.”

“We need rapid access to different, and potentially game-changing, models of computing and new technologies. Without this, we could be left in the dust by the competition.”


“I don’t want to invest capital in the very early stages – I want to wait until there are signs that the business will survive.”

“There are huge risks associated with storing data that we aren’t competent to manage. We want to get rid of it but still access it through a competent authority.”

“Our current environment is overly dependent on a few key individuals to support technology – a huge risk.”

“We have no physical space – we’re full!”

As noted in an earlier segment of this report (Technology Description), cloud computing offers many benefits, including greater agility, lower cost, improved security, reduced risk, easier compliance with regulation, higher-quality IT support, and better business continuity. Analysts predict compound annual growth rates of over 25% in the cloud computing market. The underlying challenge, put as a question, would be: “Does this mean my company can grow by 25% by using cloud computing? And, if so, how exactly do we do it?” Most of our collaborative INFOTE Group 3 research confirms that cloud computing is a major development in IT, that it will grow as predicted, and that it can deliver real business benefits to companies. Yet the business potential of this new technology is rarely considered, and the concept remains vague to many. With this midterm paper:
• we aim to provide an understanding of the business aspects of cloud computing, focusing conceptually on the different characteristics and models in the value network that provide new computing resources;
• we will summarize the core characteristics and suggest a comprehensive definition;
• we will further examine the evolution from outsourcing to cloud computing as a new IT deployment paradigm.

This midterm paper thus highlights the effects on the outsourcing value chain, summarizes the key market players and their roles within a new cloud computing value network, and finally develops a generic value network of the different players in the cloud computing line of business.

CONCEPT OF CLOUD COMPUTING

The Characteristics and Model of Cloud Computing (Source: Cloud Computing, The Open Group)

“Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing. We use the term Private Cloud to refer to the internal datacenters of a business or other organization that are not made available to the general public. Thus, Cloud Computing is the sum of SaaS and Utility Computing, but does not include Private Clouds.” Cloud computing is a complex concept: it is not based on a single technological breakthrough, but comes about through the combination of several innovations and improvements, most notably the development of virtualization, the increasing capacity of the Internet, and the growing sophistication of Internet-based technologies. It has five essential characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. It has three service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). And it has four deployment models: private cloud, public cloud, community cloud, and hybrid cloud.
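The NIST-style model above (five essential characteristics, three service models, four deployment models) can be captured as a small data structure. The sketch below is purely illustrative; the `classify_offering` helper and its labels are our own, not part of any standard:

```python
# Illustrative sketch: the cloud computing model described in the text as plain data.
NIST_CLOUD_MODEL = {
    "essential_characteristics": [
        "on-demand self-service",
        "broad network access",
        "resource pooling",
        "rapid elasticity",
        "measured service",
    ],
    "service_models": ["IaaS", "PaaS", "SaaS"],
    "deployment_models": ["private", "public", "community", "hybrid"],
}

def classify_offering(service_model: str, deployment_model: str) -> str:
    """Return a short label for an offering, validated against the model."""
    if service_model not in NIST_CLOUD_MODEL["service_models"]:
        raise ValueError(f"unknown service model: {service_model}")
    if deployment_model not in NIST_CLOUD_MODEL["deployment_models"]:
        raise ValueError(f"unknown deployment model: {deployment_model}")
    return f"{service_model} on a {deployment_model} cloud"

print(classify_offering("SaaS", "public"))  # SaaS on a public cloud
```

Modeled this way, the earlier distinction follows naturally: “cloud computing” in the quoted definition covers SaaS and utility computing on public clouds, while the same data structure still names private and hybrid deployments.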


Layers of Cloud Computing

The idea behind cloud computing is based on a set of many pre-existing and well-researched concepts such as distributed and grid computing, virtualization, and Software-as-a-Service (SaaS). Although many of these concepts are not new, the real innovation of cloud computing lies in the way it provides computing services to the customer. Various business models have evolved in recent times to provide services at different levels of abstraction, including software applications, programming platforms, data storage, and computing infrastructure. Classifying cloud computing services along different layers is common practice in the industry (Kontio 2009, Reeves et al. 2009, Sun Microsystems 2009). Wang et al., for example, describe three complementary services, Hardware-as-a-Service (HaaS), Software-as-a-Service (SaaS), and Data-as-a-Service (DaaS), which together form Platform-as-a-Service (PaaS), offered as cloud computing (Wang et al. 2008). In an attempt to obtain a comprehensive understanding of cloud computing and its relevant components, Youseff et al. (2008) were among the first to suggest a unified ontology of cloud computing. According to their layered model (see Figure 1), cloud computing systems fall into one of five layers: applications, software environment, software infrastructure, software kernel, and hardware. Each layer represents a level of abstraction, hiding all underlying components from the user and thus providing simplified access to the resources or functionality, as we briefly describe in the following. In terms of user interaction, the cloud application layer is the most visible layer to the end customer. It is usually accessed through web portals and thus forms the front end the user interacts with when using cloud services. A service in the application layer may consist of a mesh of various other cloud services, but appears as a single service to the end customer. This model of software provision is also referred to as Software-as-a-Service (SaaS).

The cloud software environment layer (also called the software platform layer) provides a programming-language environment for developers of cloud applications. The software environment also offers a set of well-defined application programming interfaces (APIs) to utilize cloud services and interact with other cloud applications. Developers thus benefit from features such as automatic scaling and load balancing, authentication services, communication services, and graphical user interface (GUI) components. However, as long as there is no common standard for cloud application development, lock-in effects arise, making the developer dependent on the proprietary software environment of the cloud platform provider. This service, provided in the software environment layer, is also referred to as Platform-as-a-Service (PaaS). The cloud software infrastructure layer provides resources to the higher-level layers, which are utilized by cloud applications and cloud software platforms. The services offered in this layer are commonly differentiated into computational resources, data storage, and communication. Computational resources in this context are usually referred to as Infrastructure-as-a-Service (IaaS). Virtual machines are the common form of providing computational resources to users, who can fully administer and configure them to fit their specific needs.


Virtualization technologies can be seen as the enabling technology for IaaS, allowing data center providers to adjust resources on demand and thus utilize their hardware more efficiently. The downside is the lack of strict performance allocation on shared hardware resources: because of this, infrastructure providers cannot give strong performance guarantees, which results in unsatisfactory service level agreements (SLAs). In analogy to computational resources, data storage within the cloud computing model is offered as Data-Storage-as-a-Service (DaaS). DaaS allows users to obtain demand-flexible storage on remote disks which they can access from anywhere. As with other storage systems, trade-offs must be made among partly conflicting requirements: high availability, reliability, performance, replication, and data consistency, which are in turn manifested in the service provider's SLAs. A fairly new idea is Communication-as-a-Service (CaaS), intended to provide quality-of-service-assured communication capabilities such as network security, dedicated bandwidth, and network monitoring. Audio and video conferencing is just one example of a cloud application that would benefit from CaaS. The software kernel layer represents the software management environment for the physical servers in the datacenters. These software kernels are usually implemented as an OS kernel, hypervisor, virtual machine monitor, or clustering middleware. Typically this layer is also the level where grid computing applications are deployed. At the bottom of the layered model is the actual physical hardware, which forms the backbone of any cloud computing service offering. Hardware can also be subleased from datacenter providers, normally to large enterprises. This is typically offered in traditional outsourcing plans, but in an as-a-service context it is also referred to as Hardware-as-a-Service (HaaS). With regard to the layered model of Youseff et al. (2008) described above, cloud computing can be perceived as a collection of pre-existing technologies and components. We therefore see cloud computing as an evolutionary development and reconceptualization of the delivery model, rather than a disruptive technological innovation.
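To summarize the layered model of Youseff et al. discussed above, the five layers can be listed from most user-visible to physical, paired with the as-a-Service terms from the text. The mapping below is our own condensation of that discussion, for illustration only:

```python
# Layers of the Youseff et al. (2008) cloud ontology, top (user-facing)
# to bottom (physical), with the service terms used in the text.
CLOUD_LAYERS = [
    ("cloud application layer", "SaaS"),
    ("cloud software environment layer", "PaaS"),
    ("cloud software infrastructure layer", "IaaS / DaaS / CaaS"),
    ("software kernel layer", "OS kernel, hypervisor, clustering middleware"),
    ("hardware layer", "HaaS"),
]

# Print the stack, numbered from the top.
for depth, (layer, offering) in enumerate(CLOUD_LAYERS, start=1):
    print(f"{depth}. {layer}: {offering}")
```

Each entry abstracts away everything below it, which is exactly why the model reads as a stack rather than a flat list of products.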

Evolution from Outsourcing to Cloud Computing

The relation between cloud computing and outsourcing is best illustrated by considering the current challenges of outsourcing. On the one hand, customers expect a cost-effective, efficient, and flexible delivery of IT services from their service providers, with maximum monetary flexibility (i.e., pay-per-use models). At the same time, more and more customers demand innovations, or the identification of customer-specific innovation potential, from their service providers (Leimeister et al. 2008). Out of these challenges and the constraints posed by clients, the new phenomenon of cloud computing has emerged. Cloud computing aims to provide the technical basis to meet customers' flexibility demands at the business level. Interestingly, new cloud computing offers that meet these business demands were first put forward by providers that had not previously been part of the traditional outsourcing market. New infrastructure providers, such as Amazon or Google, that were previously active in other markets developed new business models to market their former by-products (e.g., large storage and computing capacity) as new products. With this move, they entered the traditional outsourcing value chain and stepped into competition with established outsourcing service providers. These new service providers offer innovative ways of IT provisioning through pay-per-use payment models and help customers satisfy their needs for efficiency, cost reduction, and flexibility. In the past, the physical resources in traditional outsourcing models were kept either by the customer or by the provider. Cloud computing, by contrast, heralds the paradigm of asset-free provision of technological capacity.

Decision Maker

Your business situation is either a problem or an opportunity for which you are seeking a solution that includes IT enablement. You see a technological possibility as the way to solve your problem or seize your opportunity. This is your architecture vision. The first essential step in establishing the vision is to ensure that you understand your business context. This starts by putting forward a set of considerations that will help you achieve that understanding.

COST BENEFIT ANALYSIS

Cloud computing has become a fundamental shift in IT. It allows systems to be scalable and elastic, and it has freed organizations and users from needing to own data centers. Cloud computing has become a powerful computing paradigm through resource allocation, and it has contributed to green IT through the elimination of physical infrastructure and its outputs. The cost-benefit analysis of cloud computing takes into consideration various parameters, both tangible and intangible, such as the cost of servers and their maintenance, power and cooling requirements, software, personnel, and facilities such as server space and the racks that support it. In large organizations, asset utilization is not maximized on some systems while demand is fragmented across others. With cloud computing, resources are pooled and shared across the whole organization, so under-utilized resources receive a fair share of the load.

Return on Investment (ROI) is perhaps the most widely used measure of financial success in business. If we are already at the stage of assessing a proposal to use cloud computing in place of in-house IT, the following is how it can be assessed. Cloud computing supports ROI through a number of fundamental drivers that impact investment, revenue, cost, and timing, all of which can be positively influenced by using cloud services. We cite Total Cost of Ownership (TCO), the capital investment cost.

(Source: Journal of International Technology and Information Management, Volume 22, Number 3, 2013)

This midterm paper comprehensively considers the entire lifetime spending, both capital costs and cost of operations, and hence is suitable for base cost estimation. A total of nine components have been considered in base cost estimation: amortization, cost of servers, network cost, power cost, software cost, cooling cost, real estate cost, facility cost, and support and maintenance cost. For each component, the following details are provided: a) an explanation of all the variables involved and b) the method to calculate the cost of the component. The overall aim is to come up with monthly costs for all the components being considered, and thus all variables are converted to monthly parameters.
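To make the ROI discussion concrete, here is a minimal, hypothetical comparison of an up-front capital purchase against pay-as-you-go cloud spending. Every figure below is invented for illustration; the paper itself does not supply these numbers:

```python
def simple_roi(total_benefit: float, total_cost: float) -> float:
    """Classic ROI: net gain divided by cost."""
    return (total_benefit - total_cost) / total_cost

# Hypothetical 3-year comparison (all figures invented).
onprem_capex = 120_000.0          # servers, racks, facilities bought up front
onprem_opex_per_month = 1_500.0   # power, cooling, support
cloud_fee_per_month = 4_000.0     # pay-as-you-go, no capital expenditure

months = 36
onprem_total = onprem_capex + onprem_opex_per_month * months  # 174,000
cloud_total = cloud_fee_per_month * months                    # 144,000

# Treat avoided on-premise spend as the benefit, cloud fees as the cost.
savings = onprem_total - cloud_total                          # 30,000
print(f"3-year saving from cloud: {savings:,.0f} USD")
print(f"ROI of switching: {simple_roi(onprem_total, cloud_total):.1%}")
```

Under these made-up numbers the cloud option wins, but the same arithmetic can just as easily favor in-house IT when utilization is high and steady, which is exactly why the per-component TCO breakdown that follows matters.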


Unless otherwise mentioned, the currency for all calculations is United States dollars (USD) and the computations are made on a monthly basis.

Amortization

It is important to understand the contribution of IT infrastructure costs to the monthly cost structure of an organization. Hence, an amortization parameter is calculated for servers and other facilities so that costs can be fairly attributed to the various IT resources (hardware/software). This parameter is required to calculate the monthly depreciation (amortization) cost of each infrastructure item being considered. These items have an initial purchase expense, whose monthly cost is calculated based on the duration over which the investment is amortized at an assumed interest rate.

Cost of Servers

Servers are generally mounted on racks, and it is assumed that all servers have similar configurations. Using the server parameter from the previous subsection, the costs other than the base cost associated with the purchase of the server should be calculated separately.

Network Cost

The components that contribute to networking costs are NICs, switches, ports, cables, software, and maintenance. The cost of the NIC is already attributed in the server cost, while that of the software is taken up in the software cost section. Maintenance activities are likewise covered separately under support and maintenance cost. Hence, network cost deals only with the cost of switches, ports, cables, and implementation. Since networking also involves an initial expense, it is amortized to come up with a monthly cost.

Power Cost

A few studies have claimed that power is the single largest cost in large-scale data centers. Though the validity of that statement is debatable, power is clearly one of the fastest-growing costs. The IT infrastructure that contributes to power consumption in an organization includes the computing infrastructure (servers, switches, etc.), network-critical physical infrastructure, transformers, uninterruptible power supplies, fans, air conditioners, pumps, lighting, etc. (Sawyer, 2004).

Software Cost

In order to manage the data centers, it is necessary to install operating system patches and resources for load balancing. The software cost associated with the base cost estimation is due to license payments. Two classes of software are considered for cost analysis based on the license structure: Class A includes operating systems, while Class B covers other base software (application servers, VM software, etc.). Class B does not include project-specific software, as that is dealt with in the layer addressing project costs.

Cooling Cost

Past research has shown that the power consumed in a data center is equivalent to the heat generated in it, indicating power rating and thermal output equivalency (Rasmussen, 2007).

Real Estate Cost

This part follows the methodology of Li et al. (2009) to come up with the monthly cost of the real estate used by IT infrastructure. Data centers take up considerable space and account for the real estate cost. Studies have shown that a data center rated at 40W per square foot typically costs 400 dollars per square foot (Anthes, 2005). However, prices vary hugely by geographic location, so this has been taken as a generalized variable whose area-specific values can be supplied by the organization.

Facility Cost

These represent both tangible and intangible components that are essential for the normal functioning of the equipment. These facilities are wrapped into the racks which hold the servers. Hence, the TCO of facilities can be computed by segregating them into racks.

Support and Maintenance Cost

Although operational staff are a major category in an enterprise, the staff involved in maintaining data centers is very small (Greenberg et al., 2008). The ratio of IT staff members to servers is about 1:100 in an established enterprise, automation is partial (Enck et al., 2009), and performance problems are largely caused by human error (Kerravala, 2002). After examining the nature of support and maintenance costs in various organizations, it was found that the majority outsource this job, with the nature of the work documented in the contract.
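The amortization component described above can be sketched as a standard annuity-style monthly payment. The formula below is the usual textbook amortization formula, not one taken from the paper, and the sample figures are invented:

```python
def monthly_amortized_cost(purchase_price: float,
                           annual_interest_rate: float,
                           amortization_years: float) -> float:
    """Monthly cost of an up-front purchase, amortized over a given
    duration at an assumed interest rate (standard annuity formula)."""
    n = amortization_years * 12    # number of monthly periods
    r = annual_interest_rate / 12  # monthly interest rate
    if r == 0:
        return purchase_price / n  # zero interest: straight-line depreciation
    return purchase_price * r * (1 + r) ** n / ((1 + r) ** n - 1)

# Hypothetical server: 5,000 USD amortized over 3 years at 5% annual interest.
print(f"{monthly_amortized_cost(5000, 0.05, 3):.2f} USD/month")
```

The same function can be applied to networking gear and facilities, which is why the paper treats amortization as a shared parameter rather than a ninth standalone expense.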

ETHICAL IMPLICATIONS Cloud computing normally involves a third party operator, thus it is important to determine the new ethical challenges and questions posed on cloud computing. Some of these are outlined below. • Privacy of system users • Protection of customer data • Vendor security policies and system architecture • Vendor focus between small and big accounts

The most recognized rule in IT is to respect the privacy of users. Providers, IT personnel, administrators, and others often have access to employee email and personal files, and there is no question that these people will need such access in order to troubleshoot issues. Users place significant trust in them to protect their data from theft or misuse. Providers have to be honest and offer more transparency about their infrastructure, covering security policies and even system architecture. If a failure occurs and affects other users, providers need to be honest in discussing what happened, even if most cloud facilities are self-service in nature. Providers must protect all customers and treat them fairly, regardless of their size or contribution to the business.

Summary of the components

A total of nine components have been described in this section: eight cost components plus amortization, which is used frequently in the other subsections. The purpose of clearly describing each component individually is that its outputs will be used for computations in other layers and for calculating the costs of cloud computing.
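Putting the components together, base cost estimation ultimately reduces to summing eight monthly figures (with amortization already folded into the hardware-related entries). A minimal aggregation sketch follows; every component value is an invented placeholder, not a figure from the paper:

```python
# Monthly base-cost components named in the text; amounts are placeholders.
# Amortized purchase costs are assumed to be baked into "servers" and "network".
monthly_costs_usd = {
    "servers": 3_000.0,
    "network": 400.0,
    "power": 900.0,
    "software": 600.0,
    "cooling": 700.0,
    "real_estate": 1_200.0,
    "facility": 300.0,
    "support_maintenance": 1_100.0,
}

def total_monthly_base_cost(components):
    """Sum the per-component monthly costs into one base figure."""
    return sum(components.values())

total = total_monthly_base_cost(monthly_costs_usd)
print(f"Total monthly base cost: {total:,.0f} USD")  # 8,200 USD with these placeholders
```

This monthly base figure is what would then be compared against a provider's pay-as-you-go bill when assessing the move to the cloud.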


NATION-BUILDING IMPLICATIONS

Cloud computing presents a huge potential for governments and for sectors in research and technology where the sharing of ideas needs to be unimpeded and access to information has to be inexpensive. Below are areas where it could help society:
• Address inefficiencies in low asset utilization, fragmented demand for resources, duplicate systems, and long procurement lead times
• Increase operational efficiency
• Deliver faster response times
• Redirect spending saved through cloud computing to more essential services (government)
• Provide researchers with inexpensive access to IT services

Through a cloud community, researchers can share information and gain access to IT services in minutes. Assets will be properly allocated and utilized, and capacity will be evenly distributed through the cloud.


