P.I.N.G. Issue 15.1



Dr Rajesh Ingle

Branch Counsellor

Dear All,

It gives me immense pleasure to write this message for the new edition of PICT IEEE Student Branch's (PISB) P.I.N.G. The Credenz edition of P.I.N.G. is always special for all of us, and this year we have an interesting theme, 'Evolution of Memory', for Credenz '19. According to a recent survey conducted by the P.I.N.G. Editorial team, both the electronic and print versions of P.I.N.G. are well received and appreciated by readers. The magazine is a great contribution by PISB, giving everyone, including student members, an opportunity to showcase their talent and views and to further strengthen IEEE activities.

It is a great pleasure to serve PISB as its Counsellor, and a truly interesting and valuable learning experience to work at various levels in IEEE: as Counsellor at PISB; Chair, Conference Committee, IEEE Pune Section; Vice Chair, IEEE India Council; IEEE Region 10 Student Activities Coordinator; and Member, MGA SAC. I am thankful to all the members of PISB for their active support. In January 2018, I had the opportunity to attend the IEEE Region 10 meeting at Yangon, Myanmar. I also participated in the IEEE Region 10 Annual General Meeting held at Berjaya Langkawi Resort, Malaysia, on 3rd and 4th March, and had the opportunity to organize the R10 SYWL Congress at Bali, Indonesia, from 30th August to 2nd September 2018.

I would also like to acknowledge the strong support of Mr R.S. Kothavale, Managing Trustee, SCTR; Mr Swastik Sirsikar, Secretary, SCTR; Dr P.T. Kulkarni, Principal, PICT; and all the students who worked at this level. We try our best to create an environment where students keep updating themselves with emerging trends, technologies and innovations. At PISB, many events are conducted throughout the year, including IEEE Day, workshops, Special Interest Group (SIG) activities, Credenz and Credenz Tech Dayz, all widely appreciated by students, acclaimed academicians and industry professionals alike.

I thank all the authors for their contribution and interest. On behalf of IEEE R10 and IEEE Pune Section, I wish PISB and this magazine every success, and I congratulate the P.I.N.G. team for their commendable efforts.

Prof. Dr Rajesh Ingle
IEEE R10 (Asia Pacific) Student Activities Chair
Vice Chair, IEEE India Council
Dean and Professor, PICT


Flashback

nostalgia

Saniya Kaluskar, Ex-Editor, P.I.N.G.

Anyone who has been in the IEEE student branch finds an experience she or he cherishes beyond the four walls of the branch room. P.I.N.G. was one such experience for me. What started as a newsletter has now become a prestigious full-fledged magazine, and that only strengthens my belief in it. My heart is full of warmth as I write the prestigious Flashback article for this year's issue.

P.I.N.G., with all its professional glory and fame, will always hold a special place in my heart. It gave a shy kid a spotlight from the dark. It gave a scared student the opportunity to speak her mind. And it gave an introvert a platform to interact with the brightest minds around her. Every issue of P.I.N.G. is known to start a few new traditions, and I have enjoyed starting my fair share. Initiating the interview series with industry experts, to bridge the gap between academia and industry, was a thoroughly fulfilling experience. Bagging the first-ever dedicated P.I.N.G. sponsorship for the magazine was the most thrilling part of my journey. I still remember that meeting with the sponsor company; the nervous yet passionate editor in me couldn't stop talking about the power and influence of this magazine on young minds.

It is always difficult to pass on the baton to the next batch, but in my case it was a bittersweet moment. While I would miss working and running round the clock, almost tearing up after holding the first printed coloured copy of the magazine, I was equally excited to see how P.I.N.G. would unfold with the next batch. My juniors and their enthusiasm assured me that the magazine was taken over by equally (if not more) passionate leaders.

P.I.N.G. has always given students the platform to voice their opinions and have them heard. It encourages students to be more aware of the technological advances around them and urges them to be part of the revolution. The industry expert interviews, along with faculty contributions, make P.I.N.G. a holistic magazine meant for readers from heterogeneous backgrounds.

As a closing note, I'd like to leave the readers with a thought that is very close to my heart; a thought I've come to own through my life experiences. Don't give up on your dreams, don't give up on your aspirations, don't give up on you. And don't give in to what is expected of you, don't give in to what others want of you, don't give in to others. "Don't give up, don't give in."

Saniya Kaluskar is currently pursuing her MBA at the Indian School of Business and was an Editor for P.I.N.G. 9.1 and 10.0.

Quantum Computing at Google maven

Neven's Law

There is a lot of excitement in the world of computing these days around the advent of Quantum Computing and how it is expected to bring about a paradigm shift, not only in Information Technology but also in scientific research, data security and financial technology. We are at the cusp of major breakthroughs (Quantum Supremacy) that will propel us into a new era of computing power, and Google has been at the forefront of this transformation, making extensive investments in developing the technology. Below is a brief exploration of some of the major breakthroughs and milestones that have already been reached on this journey.

Quantum computing, unlike digital computing, relies on qubits instead of bits. A bit can take only one of two values at any given time, while a qubit harnesses the power of Quantum Mechanics and can be in a superposition of multiple states. To read a result, qubits (often entangled with one another) have to be measured, which collapses the superposition. The biggest consequence of this phenomenon is that qubits can solve certain kinds of optimization problems orders of magnitude faster than any classical computer. It is projected that a functioning 100-qubit quantum machine could solve certain kinds of problems faster than all the classical computers in the world combined.
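The state-vector picture can be made concrete in a few lines of linear algebra. The sketch below is a toy simulation in Python/NumPy, not any Google tooling; the gate matrix and variable names are illustrative. It prepares a qubit in an equal superposition and samples one measurement outcome.

import numpy as np

# A minimal state-vector sketch: a qubit in an equal superposition of
# |0> and |1>, followed by one simulated measurement.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# The Hadamard gate maps |0> to the superposition (|0> + |1>) / sqrt(2).
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)
state = H @ ket0

probabilities = np.abs(state) ** 2              # Born rule: |amplitude|^2
outcome = np.random.choice([0, 1], p=probabilities)
print(f"P(0)={probabilities[0]:.2f}, P(1)={probabilities[1]:.2f}, measured {outcome}")

# After measurement the superposition collapses to the observed basis state.
state = ket0 if outcome == 0 else ket1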

Google's journey in Quantum Computing started in 2013 with its collaboration with D-Wave Systems, one of the first companies in the world to exploit quantum effects in computation. This was part of a three-way partnership between NASA, D-Wave and Google to explore the biggest challenges facing quantum computing and how to achieve quantum supremacy: the event where a quantum computer computes a solution that is not feasible for current classical computing. Google is not the only company to have invested heavily in this field; IBM and Microsoft have their own labs that specialise in the area. Since then, Google's progress has grown by leaps and bounds, culminating in its recent announcement of Bristlecone, a new quantum processor. The purpose of this gate-based superconducting system is to provide a testbed for research into system error rates and the scalability of qubit technology, as well as applications in quantum simulation, optimization and machine learning.

In February 2019, Google noted that the amount of classical computation needed to simulate some of the calculations running on its quantum processors grows at a doubly exponential rate. Doubly exponential growth is far more dramatic than ordinary exponential growth: instead of increasing by powers of 2, quantities grow by powers of powers of 2: 2^(2^1), 2^(2^2), 2^(2^3), 2^(2^4), and so on. On such a trajectory it looks like nothing is changing in the initial stages, and then, all of a sudden, you are in a completely new world.

This observation has been dubbed Neven's law (by analogy with Moore's law), after Google Quantum AI Director Hartmut Neven. It shows the accelerating speed at which the world of Quantum Computing is developing, and it signals that we are at the very precipice of changes that will revolutionize how we process information and solve computational challenges for decades to come.
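To see how quickly "powers of powers of 2" outrun an ordinary exponential, a few lines of Python are enough; the comparison below is purely illustrative.

# Exponential (Moore's-law-style) growth vs. the doubly exponential growth
# behind Neven's law. Python integers have arbitrary precision, so the
# values are exact.
for n in range(1, 7):
    exponential = 2 ** n                 # 2, 4, 8, 16, 32, 64
    doubly_exponential = 2 ** (2 ** n)   # 4, 16, 256, 65536, ~4.3e9, ~1.8e19
    print(f"n={n}: 2^n = {exponential:>3}, 2^(2^n) = {doubly_exponential}")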

-Astitva Chopra
Head of Agency Program Management
Google


TESS (the Transiting Exoplanet Survey Satellite) has discovered new planets, among them GJ 357 d, which orbits in the habitable zone of the star GJ 357. If the planet's atmosphere is dense enough, heat could be trapped in it, allowing liquid water to be present on the surface.

Bridging the rift interview

with Mr Suswar Ganu

With 19 years of experience in driving strategy and delivering projects across the financial services industry, Mr Suswar Ganu's acumen in Investment Banking Technology is unparalleled. He has held multiple leadership roles across the Finance Technology sphere and is part of Deutsche Bank India's technology leadership team. An eloquent speaker, Mr Ganu has also delivered a TEDx talk about his belief that the best way to work with Information Technology is to experience it.

Mr Suswar Ganu, Director, Deutsche Bank

"What I've seen is that, over the years, it's the core concepts that matter the most."

Q. Most engineering students are unaware of the scope for technology in the Banking and Finance sector. Could you please enlighten our readers on the different aspects of Finance Technology?

A. Finance in the good old days ran on a lot of human labour. For example, back in the 90s much of stock trading happened through exchanges which ran on verbal exchanges in the broker's pit, where a bunch of brokers would stand and give quotes. After placing an order, you had to sign a piece of paper, and runners would literally run from one bank to another to ensure the order was settled. Technology has completely changed that paradigm: these pit exchanges no longer happen, and all of this is largely done by algorithms and data structures. At the leading exchanges in India you now have high-frequency trading; since everything is automated, you get microsecond responses, and technologies such as dark fiber and Direct Market Access have reduced latency significantly.

The other aspect is completely automated decision making around trading. This is an entirely new chapter called algorithmic trading, a very interesting part of how technology has changed trading patterns. To give you an example, consider any typical stock in the share market. There are buyers and sellers putting in orders, and there is an algorithm observing the pattern in which the stock trades, doing technical analysis on data not over a day or a month but over 20 years to find a pattern. It is very difficult for human beings to find all such patterns. The algorithm recognises them and can place derivative orders, spot orders and futures orders, and thus help you make profits. We've got to a stage where these algorithms fight with each other: if I'm a broker with an algorithm, the other broker figures out how I am making so much margin money and buys the algorithm too. Now all brokers have the algorithm, and while one algorithm tries to make the market by checking the delta and creating profits, another spots that and tries to play against it. The technology has essentially reached a point where these algorithms balance each other, and the delta between stock prices has been reduced to a minimum. You now need a higher-order algorithm that can beat those algorithms; you're essentially making money on algorithms fighting for that extra cent or rupee of profit.
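As a toy illustration of the pattern-seeking algorithms Mr Ganu describes, the sketch below implements a classic moving-average crossover signal in Python. The price series and window sizes are invented for the example; real algorithmic-trading systems are vastly more sophisticated.

# Toy moving-average crossover: a minimal example of an algorithm
# "observing a pattern" in a price series. The prices are made up;
# real systems work on decades of tick data and far richer signals.
def moving_average(series, window):
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

prices = [100, 102, 104, 103, 101, 99, 98, 100, 103, 106, 108, 110]
fast = moving_average(prices, 3)   # short-term trend
slow = moving_average(prices, 5)   # long-term trend

# Align the two series on their common tail, then emit a signal whenever
# the fast average crosses the slow one.
offset = len(fast) - len(slow)
prev_diff = None
for day, (f, s) in enumerate(zip(fast[offset:], slow), start=5):
    diff = f - s
    if prev_diff is not None and prev_diff <= 0 < diff:
        print(f"day {day}: fast MA crossed above slow MA -> BUY")
    elif prev_diff is not None and prev_diff >= 0 > diff:
        print(f"day {day}: fast MA crossed below slow MA -> SELL")
    prev_diff = diff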

Q. You completed your Bachelor's in Electrical Engineering from VJTI and went on to work as a Developer at Tata Consultancy Services. How did the transition from Electrical to IT take place?

A. Right from my schooling days, I had an affinity for algorithms. Although I didn't know exactly what that meant back then, I was interested in understanding how machines solve problems. In sixth standard, I made a small quiz by writing a program in a language called GW-BASIC. In 11th and 12th, I completed a certificate course in Computer Science fundamentals run by the State Educational Department and introduced as a vocational course at my college. That's when I got my formal introduction to Computer Science and felt this was what I actually wanted to do. I got admission to VJTI for Electrical Engineering, thinking it would give me a good rooting in core engineering. In my second and third years, when I started learning the concepts of data structures and algorithms again, I realised that was what really interested me. In terms of the transition, it was honestly not that difficult, because the core engineering syllabus in India teaches you a very broad-based approach to problem solving; even if you aren't studying Computer Science in terms of Computer Networks and Operating Systems, you know how to get there. I believe a good grounding in any engineering discipline forms the base for Computer Science. After I joined the industry, I dived deep into the core concepts with gusto, and a series of courses with NCST helped me sharpen my computer science fundamentals.

Q. P.I.N.G. is published by PICT IEEE Student Branch. Likewise, what kind of IEEE activities were you involved in at VJTI?

A. I was part of the Editorial team of an IEEE magazine at my institute. I distinctly remember an article that my friend and I wrote about the Pentium processor, which back in 1995-96 was all the rage. The article tried to illustrate how the CPU architecture, the OS and the memory stack interact. IEEE is an organisation I was associated with throughout my years at VJTI, and the magazine is my fondest memory of it in college.

Q. As you get more exposure to the management aspect of an organization, is it inevitable that you lose touch with core technology concepts?

A. I firmly believe that in the technology industry there is no such thing as management that is differentiated from engineering, which in turn is differentiated from coding. The developer in me is still the same: I can easily go into code and sit with it for three days altogether, debugging what is going on, understanding the underlying algorithm and solving it. Some individuals decide they will do core development or core Computer Science for a period of time and then make a conscious decision to move away from it; others make a conscious decision to stay with it. I respect both choices, and I am one of those who has decided to stay. So if you asked me today, whether it's Python, AI, ML or some of the other cutting-edge technology paradigms in use, I do get involved quite a lot. At times I have other things to look at, so I will trust another individual to do the detailed design, but am I in touch with what is going on? Absolutely, because that's a requirement of the job today. When you are managing a technology or engineering organization, I believe you have to display those characteristics before you can say you are somebody who is not just governing or managing a group of technologists. You have to know what is going on.

Q. You are doing your Master of Science in technology management at Columbia University. Could you tell us a little more about the field of technology management itself?

A. Technology management consists of a vast plethora of things. When you're doing technology ground up, it's not just about core development and engineering; that, I would say, is probably one fifth of technology management. You need to consider questions like: how will you store the data the technology generates? How do you think about architecture, more top-down or more bottom-up? There are always choices. Do you buy versus build? For example, if I'm building a cache management system, there are probably more than a hundred distributed cache management systems available offline, online or open source, and others that are licensed products. I have multiple choices, and every choice has its pros and cons. Have you done security checks? Are you protected against hacking? Have you done penetration testing on the website? What about ethical hacking; have you engaged a firm that does ethical hacking for you? Then there is the aspect of technology and law. Then there is the most important question of how you are going to compensate people: full-time employees, contractual employees, or a service contract with a software vendor? How are you going to think about enterprise architecture; is the architecture you've decided on now good enough for three or five years? Have you kept a budget? Money is another component you have to consider continuously. All of this together is technology management. I would strongly recommend it to somebody aspiring to a CIO position.

Q. As a Director at Deutsche Bank, you are involved in the Technology Graduate and Intern Recruitment program in India. What qualities do you look for in students who apply to these programs?

A. What I've always seen is that, over the years, it's the core concepts that matter the most. In graduate recruitment over the last 15-20 years, I've seen students grasping more and more of these core concepts, which is a good sign. At the same time, I always find some students jumping for the shiny star: they'll talk about how they've done Android programming, native device programming or iOS programming, which is fantastic, but I see a difference between students who understand the core concepts and have tried to do more, versus students who do not understand the core concepts and are only chasing the shiny object, the latest trend. Industry wants students who are very good at that core, so I think the solution is for academia and industry to come closer on more avenues than just recruitment.

Oldest black hole identified: New data from NASA's Chandra X-ray Observatory provides evidence for the oldest known black hole in our universe. Its origin has been traced back to 850 million years after the Big Bang, 450 million years earlier than the previously known oldest black hole.

Solar metal nanowire mesh provides high electrical conductivity and high transmissivity compared to the traditional electrodes used in solar cells. Such meshes also have applications as optical metamaterials, which exhibit unique optical properties, such as a negative refractive index, not found in naturally occurring materials.

Q. Throughout your career, you have worked on several interesting projects. Which project would you consider the most challenging?

A. Most of the time, what I've seen is that we as technologists lose sight of the fact that, at the end of the day, success means success and nothing else. However much we celebrate the processes that lead to the outcome, until the outcome is reached, it's the outcome that matters. There was a specific problem I was working on where I had to make the order packets in a multi-order exchange system I had developed consume as little data as possible. The efficiency of the entire system was based on how small the packet was. I remember the packet size I was looking at was something like 424 or 430 bytes, and the idea was to compress it more and more. We were all trying to shrink the data packet as much as possible, and I think we had lost sight of what the packet was all about. We actually got to a point where there was a probability of packet loss. In our zeal to reduce the packet size, we introduced a bug in the header. As soon as the system started at nine o'clock, it stopped within one minute. That's where I got my first lesson about the complexity of using technology. The brokers were obviously very agitated, and one of them came and said to me, "I know you're trying to reduce that packet size from 424 bytes to 419 bytes, and all good wishes to you for that. But do you realize that there were 50,000 orders from one of India's largest tech companies in one of those packets you just lost, my friend?" That really struck me and has stayed with me since. For us as technologists, gaining the perspective that it's not just about technology but about the application of technology was the biggest lesson. I still call it one of my toughest challenges, because after that we were very deliberate about the kinds of changes we made; it was not just about being technically competent.
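A fixed-width binary header like the one in this story can be described in a few lines. The sketch below uses Python's struct module with an invented field layout (the real exchange format is not public) to show how a single mis-declared field corrupts every field after it.

import struct

# Hypothetical order-packet header: order_id (4 bytes), quantity (4 bytes),
# price in paise (4 bytes), side flag (1 byte). '<' means little-endian
# with no padding, so both ends must agree on the exact layout.
HEADER = struct.Struct("<IIIB")

packed = HEADER.pack(1001, 50_000, 12_345, 1)
print(len(packed), "bytes")                  # 13 bytes

order_id, qty, price, side = HEADER.unpack(packed)
print(order_id, qty, price, side)            # 1001 50000 12345 1

# A "header bug": the receiver declares quantity as 2 bytes while the
# sender still writes 4. Every field after the mismatch is misread.
BAD_HEADER = struct.Struct("<IHIB")          # 11 bytes, not 13
print(BAD_HEADER.unpack(packed[:BAD_HEADER.size]))   # price and side now garbage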

Q. You have mentored many international technology teams and individuals in the course of your career. Who do you believe has been your mentor?

A. In the early part of my career, I worked for a couple of years with RBI and its sister organisation CCIL. We were doing some great work, but it was always difficult to communicate plans and, at times, to be assertive. I failed miserably in one Board of Directors meeting, where I was asked a technical question and did a bad job. Interestingly, the Chairman of the organisation, who unfortunately is no longer with us, called me to his cabin. I was extremely nervous. He offered me a cup of tea, asked me to talk about myself for ten or twenty minutes, and then said, "You seem to be a smart chap. Why did you not talk confidently in the meeting? Why were you not assertive in putting across your point of view? Forget about the fact that I am a Chairman with 20-25 years of experience, or that I am a Managing Director. In that room, who is the technology expert? You." That person was Dr R.H. Patil, the Founder of the National Stock Exchange of India. After that, I talked with him eight or ten times over two years, and I remember each of those half-hour conversations. They were the best mentor-mentee conversations that ever happened, and frankly, until the end of those two years I didn't even realise that what I was getting was formal mentoring.

Q. What message would you like to give to the readers of P.I.N.G. 15.1?

A. I think being inquisitive and applying technology to real-world problems is what engineers are very good at, and it is important that we stay true to that paradigm. In light of Engineer's Day on 15th September, I would like to echo what Henry Royce once said: "Strive for perfection in everything you do. Take the best that exists and make it better. When it does not exist, design it."

We thank Mr Suswar Ganu for his valuable time and his contribution to P.I.N.G.
-The Editorial Board

Qutrits editorial

teleporting into the future

"The history of the universe is, in effect, a huge and ongoing Quantum Computation. The universe is a Quantum Computer." -Seth Lloyd

How true these words are. The expanse, complexity and intricate design of the universe are the perfect metaphor for quantum computation, which for years has been the centre of attention of the scientific community.

On August 15th, 2019, Austrian and Chinese scientists successfully teleported three-dimensional quantum states called qutrits, experimentally demonstrating what had until now been considered only a theoretical possibility. Qubits (two-dimensional quantum states which store information) have been the focal point of quantum computation for nearly a decade, but this recent advancement has shifted attention from qubits to qutrits, causing ripples in the pool of knowledge that is quantum physics.

This international first was demonstrated by researchers from the Austrian Academy of Sciences and the University of Science and Technology of China, and it has paved the way for next-generation quantum network systems.

Although qutrits and qubits share one property, the ability to exist in multiple states at once, the essential difference between them lies in the many ways in which qutrits are better:
• Qutrits can represent a combination of 0, 1 and 2, while qubits can only represent a combination of 0 and 1.
• Qutrits can carry larger amounts of information than qubits due to the greater number of states.
• Communication with qutrits incurs lower overheads and fewer noise-generated errors.
• Qutrits are more robust to decoherence under certain environmental interactions.
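For readers who prefer symbols, the contrast can be written compactly; this is a brief sketch added here, with alpha, beta and gamma as generic complex amplitudes:

\[ |\psi_{\text{qubit}}\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 \]

\[ |\psi_{\text{qutrit}}\rangle = \alpha|0\rangle + \beta|1\rangle + \gamma|2\rangle, \qquad |\alpha|^2 + |\beta|^2 + |\gamma|^2 = 1 \]

The fidelity quoted below compares the teleported state \(\rho\) with the ideal state \(|\psi\rangle\) via \(F = \langle\psi|\rho|\psi\rangle\), so \(F = 1\) means perfect teleportation.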

Unlike the teleportation of famous science-fiction movies, quantum teleportation is the transmission of the properties of a quantum particle to another, distant particle, rather than the transmission of the particle itself. For example, consider two entangled electrons and their respective spins: whatever happens to one electron's spin influences the other's spin too. This property is known as quantum entanglement. The prerequisites for the quantum teleportation were a qutrit to be teleported, a conventional communication channel capable of transmitting the states, and a means of generating an entangled EPR pair of qutrits.

The team of scientists teleported the properties of a photon that had three possible states: the particle could take three different paths, or all three at once. This three-path system was created using optical systems of lasers, beam splitters and barium borate crystals. According to Chao-Yang Lu, one of the physicists involved, the arrangement is very similar to Young's double-slit experiment, in which two slits are made in order to create an interference pattern. The two slits represent states 0 and 1, since a photon goes through both; hence a qubit is formed. To this they added another slit, thus adding another state and creating a qutrit.

Creating a qutrit was only the first step towards the larger goal of teleporting it. To teleport a qutrit, it must be entangled with another, and since light rarely interacts with itself, creating such an entanglement is all the more difficult. To confirm entanglement of the qutrits, the scientists had to test their Bell state, the condition of maximum particle entanglement. It is important to analyse which Bell state the particle is in so as to ensure that the information is conveyed with high fidelity. The teams succeeded in a Bell-state measurement taken over 10 states, with a resulting fidelity of 0.75; a system that teleports perfectly has a fidelity of 1, so the current system is not fully efficient. Nevertheless, according to the scientists, even the imperfect setup has been instrumental in demonstrating qutrit teleportation. They now aim to improve their teleportation method, which has been critiqued as slow and inefficient, and plan to scale the study to four-level systems called ququarts.

Applications of Qutrit Teleportation:

1. Advancing towards the Quantum Internet: As scientists prepare for a future quantum internet, the experimental demonstration of qutrit teleportation is a big step forward, as it implies that large amounts of information can be transferred securely. According to Ciarán Lee of University College London, "The higher the dimensions of your quantum system, the more secure you can ensure your communication is and the more information you can encode." Researchers have said that such teleportation is not restricted to three dimensions and may be extended to higher dimensions in the future, so this demonstration has profound ramifications for the development of the quantum internet. The current accuracy of qutrit teleportation is 75%, but it is bound to improve in the coming years.

2. Advancing towards Quantum Cryptography: The primary aim of quantum cryptography is the transfer of information without tampering or eavesdropping. Due to qutrit entanglement, any discrepancy can immediately be detected and rectified. Interfering with qutrits causes them to lose their delicate quantum state, leaving an evident trace of hacking. The information is thus protected by the fundamental laws of physics.

3. Improvement in Quantum Communication: Since qutrits carry larger amounts of information, channel capacity increases, resulting in improved quantum communication. Stefanie Barz, a quantum physicist at the University of Stuttgart, opines that the biggest advantage of quantum networks is that they can be built step by step, with different functionalities added at each step.

With each passing day, we inch one step closer to the longstanding goal of developing quantum computers. One simply cannot deny the wide spectrum of possibilities and opportunities this will offer, albeit in the future. Quantum computing applications are not predicted to replace classical networks but to complement them, because quantum computing is good at solving certain types of problems and not others. It is predicted to have immense applications in the fields of Artificial Intelligence, Molecular Modelling, Financial Modelling, Weather Forecasting, Biomedical Simulations and Particle Physics. The qutrit teleportation thus marks a significant chapter in the history of quantum computation and is a remarkable feat in the huge ongoing computation that is our universe.

-The Editorial Board


CO2Tree is a concept by Bio-Urban, a startup based in Mexico City, which has installed artificial trees that can suck up as much air pollution as 368 real trees. These artificial trees contain micro-algae that clean up the carbon dioxide and other contaminants present in the air. Each tree costs about $50,000 and weighs about a ton.

Decoding the techie colloquium

with Mr Aditya Phatak

With over 15 years of business, management and technology experience in Life Sciences, Healthcare IT and Software Development, Mr Aditya Phatak is the face behind exploring how to bring next-generation genomic technologies to India so as to help enable personalized and cost-effective patient care. He started out as an Electronics and Telecommunications Engineer from Walchand Institute of Technology; serendipity brought him into the field of Life Sciences, after which there was no turning back. He is passionate about developing information management systems and tools to help biomedical researchers effectively manage and analyse large volumes of data. Aside from his illustrious professional career, he has a keen interest in the outdoors, particularly cycling, running and swimming.

Mr Aditya Phatak, Managing Director (India operations), PierianDx

"Programming is like a sport. You can't learn it by just watching, you have to write, practice and learn."

Q. How did your journey in the field of Computer Science begin?

A. After my Bachelor's, I got a job at Persistent for two years. When Mr Anand Deshpande started Persistent, it was really just a database company, and my first project there, when I started in 1995, was to make a linker optimizer. Microsoft had outsourced some of this work to Persistent, and it was a very prestigious project. I was fresh out of college, and I got this great chance where I did the multithreading of Pass 1 and Pass 2; we got around 22% gains, and as a test case Microsoft had given us the .obj files of MS Paint. My second project at Persistent was essentially implementing SQL operations by designing algorithms. It involved a lot of data structures and algorithms, because we essentially had to build a data warehouse and design the platform.

Q. During your tenure as Vice President of Life Sciences and Health Care at Persistent Systems, you were also responsible for setting up an R&D Center in Bloomington, Indiana. What was the purpose of setting up this Center?

A. There were two reasons. One was that Persistent's Founder, Mr Anand Deshpande, graduated from Bloomington and wanted to give something back to the community. The other was that, at the time, a lot of people were questioning Indian IT companies, saying they were taking jobs away and not really creating any value other than the final product. Persistent had also evolved from being an offshore partner to an informatics partner, and I was then part of its technology incubation group. Indiana University offered us a great opportunity to collaborate with the IU School of Informatics and Computing. We decided to sponsor research in two areas, Life Sciences and Cloud Computing, thus taking one of the horizontal areas Persistent was interested in and one of the vertical areas. We sponsored PhD students to produce innovation that could possibly help the company. We also ran workshops where I invited many of my clients from Agilent Technologies and faculty members from the School of Informatics to discuss and innovate probable solutions. It always happens that when academia and industry come together, a lot of ideas come forth; if you don't actively work with academia, people just work on their day-to-day projects. It was an investment for Persistent to figure out whether we could bring some innovation out of this initiative.

Q. Are such collaborations possible in India as well?

A. That is a very good question, because we have been looking at some R&D partnerships. Our domain here is different: our company has good informatics and scientific skills, and what we lack is medical skills. We have done some work together, but there is no formal collaboration as of now. In India, people are realizing that you can't have all the skills in-house. There is always something that is core to you and something that is context. For Tata Memorial Hospital, the core is patient services and medicine, while informatics and genetic data analysis is core to us. Our company's goal is to become the most trusted source of knowledge for cancer genomics, so a company like ours will definitely have value for a place like Tata Memorial Hospital and its cancer therapy. If there's a win-win situation, these collaborations will definitely happen.

Q. You are the Founder of inDNA Research Labs, an early-stage big data analytics healthcare startup. What is the story behind conceptualizing and founding this unique lab?

A. When I moved back to India in 2012, I took a sabbatical and decided that I wanted to start my own company. I was very excited about the genomic space, and so in 2013 I started inDNA, named so because it contains both "INDIA" and "DNA". I never wanted to set up a wet lab; it was always going to be a dry lab. I wrote a grant proposal and was one of the early recipients of the Biotechnology Ignition Grant (B.I.G.) provided by the Government of India's Department of Biotechnology. The B.I.G. grant was for developing an idea into a proof of concept: a 50-lakh-rupee grant for 18 months. I started the company at NCL, a premium biotech incubator. I envisioned that some wet labs in India would eventually do sequencing on the Illumina panel. Once you sequence a patient, you get data from 48 important key genes, and I made a pipeline to convert this raw data into clinical data. It was a simple proof of concept and we had some successes, but at that time sequencing was not very popular, it was very expensive, and the drugs were available. After the grant finished, we had very little funding. I went to an ACMG conference in the US and coincidentally met an old colleague, Rakesh. He had recently spun out a company called PierianDx with much better technology and much better funding, and he suggested that I join them and run the India Center. That's exactly what we did: I moved all my earlier team members over, and we started this Center. That is how we slowly transitioned from inDNA to PierianDx India.

Q. What are some pointers one must keep in mind before founding a start-up so as to ensure its success?

A. First of all, you must have passion in that area. You must not do it only because you want to earn money; money should be a by-product, not the starting point for developing a product. Secondly, it should solve a real problem. There is a saying by Clayton Christensen on how you know whether your idea is correct: a simple test is asking, "Does it get the job done?" You have to really think about that pipeline; you can't be partial. And finally, you need the ability to take some risk. I would say the younger you are, the more risk you can take; the older you become, the more your ability to take risk goes down, because you have other obligations and the energy might not be there. While you are younger, if you know that you have that entrepreneurial knack, definitely give it a shot. That being said, not everyone has to become an entrepreneur. However, if you are a technologist who has that knack, then I would recommend looking for a partner, and this is where your college years come into play. You have that trust, and you've spent those key years together with your batchmates; right when you're in college, you can think about who your business partner will be. When you have a start-up, it's really your conviction and your vision that will attract like-minded people who also understand the risk and are willing to take it. If the company is on the right path, the key employees will always stay, because they are growing and getting experience that they wouldn't get even in a mature company.

Q. Your father, Dr D.B. Phatak, is one of India's most renowned computer scientists. What role did he play in your decision to enter this field? What was the environment like growing up?

A. The environment growing up was really nice; I never had any pressure even to become an engineer. My father is a very reputed faculty member, and I learned a lot just by watching him. His mantra, not just for me but for all his students, is to "Think Big, Work Hard and Enjoy Life". He is past 70 and still works 8-10 hours every day. I don't remember him ever asking me what my score was. Since I grew up on an IIT campus, I had access to a computer at home from a very young age and learned programming myself. In 9th grade I learned the basics and would write games. I naturally grew a liking for computers, and so I decided to do computer engineering.

Q. How is developing information management systems and tools for the purpose of biomedical research different from developing the same for a corporate or business requirement?

A. Each information management system has its own workflow. I won't say they're very different, but with each domain there are certain nitty-gritties you need to be aware of. Since we work on clinical and patient data, our biggest priority is privacy and security; if it were some other corporate project, my biggest priority would be cost. In our matrix, privacy, security and compliance with guidelines come first. Then comes stability of the code, because our customers are not IT savvy; they're medical doctors, and they don't like a lot of changes every now and then, so making products seem seamless for the user is our priority. Then comes cost. I'm not saying cost isn't important, but with the amount of litigation that is possible, privacy and security really become number one.

Q. Is there any message you would like to give to the readers of P.I.N.G. 15.1?

A. This is the time when you should explore. Till around 25-26, keep exploring; these are going to be your formative years. Make sure your horizontal skill set is strong. Focus on creating value, and don't worry about grades. Solve real problems and learn; trying to solve a real problem, even if it's just a proof of concept, is important. Don't worry too much about marks, and focus on developing skills. Attend networking events and do internships. My recommendation is to harness your computer skills and write good code. Programming is like a sport: you can't learn it by just watching, you have to write, practice and learn. Ask questions, get experience, build your network, and explore. Create your own brand, starting from your student days.

We thank Mr Aditya Phatak for his valuable time and his contribution to P.I.N.G.
-The Editorial Board

Battery-free cellphone: A battery-free cellphone uses energy from other sources, such as radio waves and sunlight. Radio waves can generate a small amount of energy that can power the device. The team behind it is working on a base station that allows the user to use the phone from a distance of 15 meters.

Email and Security maven

fortifying business systems

Email plays an important role in our digitally controlled lives. Today, every small and large profit and non-profit organization across the globe uses email for communication. In 2015, a study conducted by the Radicati Group confirmed that an average user sends and receives at least 122 emails per day, a big volume for any user to process. This popularity has also made email a vector of choice for various online threats. From an internal office messaging system, to gaining validity as a legal business document, to becoming a major channel of cyber threats, email has come a long way.

Evolution of Email Through the Years

1971: The first network email, "QWERTYUIOP", was sent by Raymond Tomlinson, who worked for the Advanced Research Projects Agency (ARPA). Tomlinson used the "@" symbol to connect the computer to the mailbox.
1977: People started sending mail to other networks using the ARPANET.
1985: Defense and government employees, academic professionals, and students in the US started using email for information exchange.
1991: Tim Berners-Lee created the World Wide Web, which improved Internet access and became a key factor in popularizing email for communication.
1996: The first webmail service, Hotmail, was launched by Jack Smith and Sabeer Bhatia. Hotmail allowed everyone to start a personal email account and access it from anywhere in the world: the beginning of a new era, and the end of ISP-based email.
1996: The first email-borne threat, spam, was reported. Spam carried malware which tried to extract confidential information from users.
1998: The word "spam" was added to the Oxford English Dictionary as a reference to unsolicited, undesired, or illegal email messages.
2003: The BlackBerry smartphones RIM 850 and 857 were launched, starting the era of mobile emailing.
2004: The Controlling the Assault of Non-Solicited Pornography and Marketing Act of 2003 came into force in the US. Signed by President George W. Bush, the law was widely criticized for failing to adequately address spam.
2008: President Barack Obama became the first US president to exploit the benefits of mobile email.
2012: A study confirmed that there were more than 3 billion active email accounts in the world. Every day approximately 294 billion emails were sent, of which 78% were spam.
2016: The Radicati Group found that approximately 100 billion corporate emails are exchanged every day.
2017: Ransomware, business email compromise, and spear phishing emerged as the top email-borne threats.
2018: Despite the rise of social messaging apps, 78% of teenagers used email.
2019: A majority (62.86%) of business professionals prefer email for business communication.

Landscape of Email Threats

• Today, email attacks have become complex, organized, targeted, and pervasive.
• As per the SANS Institute, ransomware has emerged as the most identified form of cyber attack across the globe. Ransomware is a type of malware which gains access to a computer and encrypts its data, blocking the organization's access until a large sum of money is paid.
• A study by Osterman Research confirmed that nearly half of North American business organizations were affected by ransomware in 2017; about 60% of the attacks were conducted through email.
• As per the 2016 Threat Landscape Survey by SANS, whaling and spear phishing are emerging as major forms of cyber attack. Spear phishing attacks are socially targeted campaigns in which attackers access the social media profiles of unsuspecting employees and gather as much information as possible before launching an email attack.
• Business Email Compromise (BEC) is another form of cyber threat which continues to gain strength. In this type of attack, an email imitates a corporate identity, such as a company CEO or a trusted sender, and asks for a wire transfer. In 2016, businesses affected by BEC lost $140,000 per attack, and the Federal Bureau of Investigation (FBI) reported over $5 billion in losses between 2013 and December 2016. It is not only big corporations; even small firms have reported BEC scams.
• Although data theft is one of the more underrated forms of email threat, a 2017 report by Osterman Research suggested that 69% of businesses were affected by it. They reported significant damage due to data loss caused by the actions of departing employees, who may delete important email data or plant malicious programs on the server. The report also stated that almost three in five businesses had not anticipated this threat.

Thus, threat defense and data security have become a top priority for business organizations. As email-borne threats continue to grow, many organizations are investing in robust cyber defense strategies, such as email archiving, to protect their data, employees, and business.

Achieving Cyber Resilience Through Email Archiving

Cyber resilience is gaining recognition due to its multidimensional approach to cybersecurity. It combines concepts such as business continuity, data security, and business resilience. In simple terms, it is an acknowledgment that cyber attacks on a company's email systems will continue, and that threat agents may sometimes succeed. A robust resilience strategy focuses not only on combating an unsuspected cyber attack but also on assuring fast recovery and business continuity after a threat is negated. Cloud-based email archiving is one such robust cyber resilience strategy, addressing the risks associated with email security, email availability, and email archiving.

Organization-wide Value of Email Archiving

Migrating to robust cloud archiving makes life easier for the entire organization in the following ways:
• Ensures Resilience with Strong Security Features: Many email archiving solutions are equipped with end-to-end encryption and several other features that help protect sensitive business data from corruption, damage, or misuse. There is also no mixing of business data, because it is stored in separate silos. Thus, when a cyber attack or a natural disaster occurs, the cloud-based archiving solution makes it easy to retrieve business information and maintain business continuity.
• High Productivity for Users: Preserving business records in their authentic form is a key requirement of e-discovery and compliance regulations. Compliance is always analyzed in three ways: security, permanence, and auditability of data. The consequences of slow e-discovery can be massive, including heavy fines, a risk of sanctions, and reputation loss, so Legal and Risk Management teams spend a lot of time organizing email and data archives to meet compliance regulations. Modern cloud-based email archiving solutions have faster search capabilities which streamline the processes required for compliance and e-discovery, letting these teams save time on document searches and spend it on more productive tasks.
• Easy Integration with Legacy Business Tools: Email archiving systems can be easily integrated with various legacy business tools like Splunk, Salesforce, etc., reducing business complexity.
• Less Involvement of IT Teams: Advanced email archiving solutions are intuitive and enable authorized people to safely and easily access their documents. A user can launch an independent search without seeking the help of the IT team, so organizations can deploy their IT teams on productive tasks that make a bigger impact on the business.
• Enables Faster Decision Making: A robust email archiving solution lets organizations analyze archived data for insights on trends and patterns. These insights enable faster decision making and improve business performance.

For these reasons, 33% of decision-makers in IT organizations say that email archiving solutions will become a major business driver in the next two years, as confirmed by a 2017 Osterman Research study.

Exhibits

ClrStream (cyber resilience through mail with business continuity) is a cloud-based mail cleaning, mail security and business continuity solution which scans every inbound email for malware, ransomware, viruses and spam, quarantines infected email, and passes only clean mail to users. This reduces the chance of email-borne infection to near zero. In addition, by retaining a copy of every inbound and outbound mail on the cloud for a limited period, the service provides alternate access to users in case of a disaster on their primary mail server.

Vaultastic (cyber resilience through robust and safe email archiving) is a cloud-based email archiving and e-discovery solution which helps businesses achieve cyber resilience by safekeeping a copy of every mail transacted by all users in a secure, separate location, and by making these vaults available to users via a tamper-proof, secure web interface. The mails are kept safe, secure, search-ready and always online using technologies such as encryption, role-based access, location-based access control, multi-factor authentication, audit trails, elastic stores, and more. Once an email is ingested into Vaultastic, it is safe for life and always available online, instantly, via e-discovery. The solution is designed to ingest email in-line, along the path of delivery, and not post-delivery, since there is very little control once the mail is in a user's mailbox. This ensures that no matter what the user does after the mail has been delivered, there is already a copy of all email in a central, physically separate repository.

Email is already the most popular system of business communication, with an estimated 60% of business-critical data captured in mailboxes. As businesses become more digital, email is gaining even more importance as a destination for authentication, notification and authorisation, besides communication. The resulting trail of data generated through these digital exchanges is becoming more valuable, and securing and organising it for easy access enables productivity gains and innovation. Email as a business communication tool continues to evolve to address productivity and security concerns as new patterns of usage and security threats emerge.

-Sunil Uttamchandani
Co-Founder and Chief Solution Architect
Mithi Software

Ancient galaxies have been discovered using telescopes specifically designed for detecting specific wavelengths. These galaxies were previously undetectable even to the Hubble Telescope, because the stretching of light shifts their visible light into infrared wavelengths. They are intimately connected with supermassive black holes and the distribution of dark matter.

Telescopic contact lenses that can zoom in on objects: new contact lenses have been designed to zoom with a double blink, and can work even when the user's eyes are closed. They use electro-oculographic signals, which expand the lens polymer; controlled by five electrodes that act like muscles, the lenses zoom in and out on naturally generated electrical signals.


Two atoms thick gold: Scientists at Leed’s Molecular and Nanoscale

Anti-HIV Nanobots pansophy

the miniscule healers

T

here are millions of people in the world suffering from serious diseases like AIDS (Acquired Immune Deficiency Syndrome) and cancer. We use different therapies like radiation therapy, chemotherapy, biotherapy, cocktail therapy, etc to cure them. These treatments cannot cure the patients completely as they destroy the affected cells. Their effect lasts for only a few months or a few years. However, these conventional treatments have many side effects. Using advancements in robotics, affected cells can be destroyed and, most importantly, be cured. We can use this method to increase the lifespan of patients with minimal side effects.

Physics Group have made these gold nano sea-weeds possible by creating a lattice of surface atoms which could be used to form bendable screens, electronic inks and transparent conducting displays. These gold sheets could also act as highly effective artificial enzymes.

With the advent of new classes of drugs known as protease inhibitors and the invention of triple-drug therapy which launched Highly Active Antiretroviral Therapy(HAART), where three or more combinations of drugs are delivered simultaneously. However, this process has tremendous side effects and produces resistance to some classes of drugs. Researchers have discovered an advanced method by using nanotechnology like Gene Therapy. This method involves a nanorobot for vivo operations to get rid of HIV-AIDS and cancer. Nanorobots are nanodevices which can be used for protecting the human body against a pathogen.

The initial cost for development is high but once they are produced in batches, the cost is expected to go down. Due to their smaller sizes, Nanorobots can travel smoothly through the body. Diamond structured carbon Nanorobots are “designed for blood operation”. Due to the inert properties and strength, the smooth surface of carbon atoms can easily trigger the human body immune system and perform tasks which are impeded in conventional therapy. Nanorobots reach all the corners of the body which are able to receive power and be reprogrammed using external sound signal network. We could establish a special network of stationary nanorobots inside the body to collect the data as it passes the affected area and reports the results. In this way, doctors can monitor the progress of the patients and provide further instructions if required. Nanorobots are flushed out of the body after completion of the operation.

Using these advancements, scientists have invented different nanotechnologies in healthcare. We can use these technologies to detect and fight HIV (Human Immunodeficiency Virus), which causes AIDS. Researchers have developed a new drug delivery vehicle, similar to cocktail therapy, using biodegradable polymeric nanoparticles. These can encapsulate non-nucleoside reverse transcriptase and fusion inhibitors, which improve antiviral activity and prolong blood circulation time.

To treat HIV/AIDS in its early stages, medicine has concentrated on antiretroviral drugs, but these are effective only up to a certain stage; current drugs extend the lifetime of the affected person by just a few years. To provide treatment with greater accuracy, we can take the help of nanodevices which use nanosensors to detect AIDS-affected WBCs and maintain a constant level of WBCs in the bloodstream.

Nanorobots can perform tasks at the atomic, molecular and cellular levels, operating on a specific site in the body. Because they act only on the target site, they produce minimal side effects, which makes them more effective than traditional treatments.


Nanorobotic phagocytes known as microbivores can patrol the bloodstream, searching for unwanted pathogens including viruses, bacteria and fungi, and kill or digest them. Nanorobots can destroy the pathogens present in blood in as little as 30 seconds, much faster than the conventional process. Microbivores could cleanse all the blood in the body of even the most severe septicaemic infections in minutes to a few hours, whereas antibiotic-assisted natural phagocytic defences take weeks to months.

With the nanorobot method, there is no risk of sepsis or septic shock. We can program nanorobots to detect and destroy cancer cells, or to clear circulatory obstructions within a few minutes to rescue a patient from ischaemic heart disease.

If a person suffers from AIDS, the body's specific defence system cannot fight all infectious agents: immune cell function is progressively lost, intrusions from infectious agents attack the person's health, and secondary diseases like pneumonia take hold in the patient's body. In vertebrates the immune system is controlled by B-cells and T-cells. B-cells are mainly responsible for producing antibodies, proteins that bind to particular molecular shapes and can fight many infections. T-cells are of two types: some directly destroy foreign cells (other than bacteria), while others help the B-cells develop antibodies against infection.

Recently developed sophisticated nanorobots can penetrate and examine tissue at the cellular level, performing surgery within cells. A physician can guide the nanorobots to extract the chromosomes from a diseased cell and introduce newly manufactured ones in their place, a process known as chromosome replacement therapy. By this method, cells could be cured completely of pre-existing conditions such as genetic diseases and cancer, extending patients' lifetimes: a boon for humanity.

-Dr K. C. Nandi Pune Institute of Computer Technology



Egyptian secret message: A team of German scientists has developed a method of 'virtually unrolling' ancient Egyptian scrolls. Physically unfolding a scroll could considerably damage the writing because of its age; instead, a high-energy X-ray beam shone on the papyrus excites the ink, and the emitted photons give an image of the characters.

Data Compression pansophy

condense. analyse. predict

Different types of data need a physical drive to store them, and there has been an explosion in the volume of images, videos and similar data circulated over the internet. Users expect intelligible data even under multiple resource constraints, such as bandwidth bottlenecks and noisy channels; this has become a fundamental problem for the wider engineering community.

The main aim of data compression is to minimize the number of bits required to encode information, reducing the hardware required to transfer or store the given data. 'Graceful degradation' is a quality-of-service term capturing the idea that as bandwidth drops or transmission errors occur, the user experience deteriorates but continues to be meaningful. Traditional data compression algorithms use handcrafted encoder-decoder pairs, known as codecs; the main problems with their use are adaptability, and uncertainty over whether the data is compressed or degraded gracefully. Compression is necessary for complex real-time applications such as geophysics, telemetry, non-destructive evaluation and medical imaging, which require exact recovery of the original images. Considerable human effort is spent analysing the statistics of new data formats in order to design efficient compression algorithms. This calls for new data compression algorithms that increase flexibility while demonstrating
improvement on traditional measures of compression quality.

There has been significant progress in data compression over the past 50 years. Its two main categories are lossless and lossy compression. In lossless compression, no information is discarded: the original size and quality of the data are restored exactly after decompression. It can be applied to graphics as well as computer data like spreadsheets, text documents or software; Portable Network Graphics (PNG), Windows tools such as WinZip, and gzip all use lossless compression. Lossy compression, by contrast, looks for redundant pixel information and discards it permanently. It is not used for text documents or software, but is widely used for media elements like audio, video and images, where it takes advantage of inherent limitations of the human eye and discards information that cannot be seen. JPEG, JPEG2000, MPEG and MP3 are formats that encode data with lossy compression.

The process of compression is completed in two steps: decorrelation and entropy coding. Decorrelation removes inter-pixel redundancy using techniques such as run-length coding, predictive techniques, transform techniques or SCAN-language-based methods. The second step, entropy coding, removes the coding redundancy. Entropy is the average number of bits required to represent a symbol: frequently used symbols are assigned fewer bits (below the entropy) and rarely used symbols more bits (above the entropy). This leads to the formation of Variable Length Codes (VLCs). Many VLCs have been proposed, such as unary, binary, gamma and omega codes; the most popular are Huffman codes and arithmetic codes.
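To make the entropy-coding idea concrete, here is a minimal Python sketch (ours, not from the article): it computes the Shannon entropy of a toy string and derives Huffman code lengths, showing frequent symbols receiving shorter codes.

```python
import heapq
from collections import Counter
from math import log2

def entropy(text: str) -> float:
    """Average number of bits needed per symbol (Shannon entropy)."""
    freq = Counter(text)
    n = len(text)
    return -sum(c / n * log2(c / n) for c in freq.values())

def huffman_code_lengths(text: str) -> dict:
    """Build a Huffman tree and return each symbol's code length."""
    heap = [(count, i, {sym: 0})
            for i, (sym, count) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        c1, _, d1 = heapq.heappop(heap)   # two least frequent subtrees
        c2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (c1 + c2, i, merged))
        i += 1
    return heap[0][2]

text = "abracadabra"
print(f"entropy = {entropy(text):.3f} bits/symbol")
print(huffman_code_lengths(text))  # frequent symbols get shorter codes
```

The average Huffman code length always lands at or just above the entropy, which is exactly the bound the article describes.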


The concept of tokenization is used in the compression algorithms LZ77 and LZ78, named after researchers Lempel and Ziv and the years 1977 and 1978. Multiple variants of these algorithms have been proposed and are used in many widely deployed formats.
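The token idea can be sketched in a few lines of Python. This is a deliberately naive, illustrative LZ77 (ours, not a production codec): each token is (offset, length, next character), pointing back into the window of already-seen text.

```python
def lz77_compress(data: str, window: int = 255):
    """Greedy LZ77: emit (offset, length, next_char) tokens that point
    back into a sliding window of already-seen text."""
    i, tokens = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            while i + length < len(data) - 1 and data[j + length] == data[i + length]:
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        tokens.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return tokens

def lz77_decompress(tokens):
    out = []
    for off, length, ch in tokens:
        for _ in range(length):       # copy from the back-reference
            out.append(out[-off])
        out.append(ch)
    return "".join(out)

tokens = lz77_compress("abcabcabcabd")
print(tokens)                         # repeats collapse into back-references
assert lz77_decompress(tokens) == "abcabcabcabd"
```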

The compression algorithm called the Burrows-Wheeler Transform clusters similar symbols together before the result is compressed; this method is currently used in the Linux operating system and in many network protocols in the TCP/IP stack. It is followed by dynamic statistical encoding, which adapts to the input given to the compression algorithm: the kind of input decides the entropy value, which may differ between multimedia data and textual data.
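A hedged illustration of the transform is below; the sentinel character stands in for the usual end-of-string marker, and real implementations use suffix arrays rather than materialising every rotation.

```python
def bwt(s: str) -> str:
    """Burrows-Wheeler Transform: sort all rotations of the string and
    keep the last column, which clusters similar symbols together."""
    s += "\0"                          # sentinel marks the original end
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last: str) -> str:
    """Rebuild the string by repeatedly sorting prepended columns."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    return next(row for row in table if row.endswith("\0"))[:-1]

out = bwt("banana_bandana")
print(out.replace("\0", "$"))          # runs of equal letters appear
assert inverse_bwt(out) == "banana_bandana"
```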

For real-time applications, such as making predictions from data or understanding the underlying processes, it is important to identify the intelligent model which governs the data. For example, in the television broadcast of a tennis match, it is more important to preserve the quality of the players and the court than the distinguishing features of the crowd.

Information theory suggests that good predictors make good compressors. In a video call, for instance, it matters more that the people are seen clearly than the other objects in the frame. Countless machine learning algorithms perform functions such as regression, classification, clustering, decision trees and extrapolation; the goal of machine learning is to train the algorithm to extract information from the underlying data in order to perform a data-dependent task. In designing such algorithms, approaches like supervised learning, unsupervised learning and reinforcement learning can be used. Neural-network-based techniques have been applied to data compression in the past, and today a variety of machine learning approaches are being tested for better results in both lossy and lossless compression. Multiple approaches have used Deep Neural Networks (DNN), Recurrent Neural Networks (RNN) and Generative Adversarial Networks (GAN). The DNN approach makes entropy coding more efficient by saving the entropy values of the previous code, while supervised deep learning and GANs make the decorrelation process more efficient.

Since the start of 2019, internet users around the globe have consumed an average of twenty-five gigabytes of data per month. As we progress towards an era of IoT and smart technologies, enormous amounts of data are going to be generated and accessed by the world at large. Reports suggest that billions of people still do not have access to the internet, and it is expected that the majority of the world's population will come online in the coming years.

This will require us to scale network infrastructure to meet the rising needs, and it opens up a new challenge: to develop efficient and secure data compression algorithms that do not compromise the quality of the data being transferred, and to use them wisely.



-Ranjit Bidwe Pune Institute of Computer Technology



CRISPR, a gene-editing tool, was used to develop a hydrogel with properties similar to those of smart materials, which can change shape on command. The hydrogel can release enzymes, drugs and even human cells, and can target drugs around tumours or antibodies around an infection.

Soft Robotics featured

beyond metal robots

Robots were conceptualized by scientists back in the era of the World Wars and have since been used in every sector imaginable. They are designed to carry out tasks that are usually beyond the physical capabilities of living beings. The terms bio-mimicry and robotics go hand in hand: to replicate living functionality, a robot must be able to perceive its environment and act accordingly. The building blocks of a robot are therefore a control system to provide the thinking capability, sensors to perceive, actuators to act, a feedback mechanism for perfecting the intended action, and a memory device to store its recordings.

In conventional robotics, all of these blocks are largely built from metal. While metal may be modelled to mimic living organisms, its rigidity prevents an absolute resemblance to a living being. A better way to mimic the physical capabilities of living organisms is to construct these fundamental blocks from more adaptable and flexible materials. Soft robotics is the subset of robotics dealing with robotic systems that have deformable bodies made of highly compliant materials, similar to tissues and muscles. Soft robots differ from industrial robots in that they are manufactured from soft, malleable materials which extend the locomotive and adaptive abilities of the system.



The constructional and design difference between conventional (hard) robots and soft robots lies in rigidity and flexibility. According to a recent study, hard robots are made of rigid materials with a bulk elastic modulus of roughly 1 GPa or more, whereas monolithic soft robots should be fabricated from soft materials, or a strategic combination of soft and hard ones, with an elastic modulus below about 1 GPa. The main challenge centres on seamlessly combining the actuation, sensing, motion transmission and conversion mechanisms, electronics and power source into a continuum body that ideally exhibits morphological computation and programmable compliance (i.e. softness).

True to its definition, soft robotics uses materials which are bio-imitable and compliant. Soft smart materials with programmable mechanical, electrical and rheological properties, amenable to additive manufacturing based on 3D printing, are essential to realising soft robots. The selection of the actuation concept and its power source, the first important step in establishing a robot, determines the size, weight and performance of the soft robot, the type of sensors and their location, the control algorithm, the power requirement, and the associated flexible, stretchable electronics.

Such soft systems are essential wherever human-robot interaction is frequent. In prosthetics, soft robotic grippers at the end of a prosthetic arm allow more delicate and accurate grasping of objects. Soft robots can assist invasive surgeries thanks to their shape-changing properties, and biodegradable, edible robots could deliver drugs inside the human body. Beyond areas of frequent human-robot interaction, the technology can be applied wherever tactile perception must be accurate, for instance measuring land erosion on shores, which is important for construction work.

Since the materials used to make these robots are polymer-based, thermal actuation methods are among the more popular ways to actuate softly. This approach is memory-based: the actuation system remembers the original shape of the robot before it adapts. To bring the resemblance to living organisms even closer, the pressure-difference method of actuation is also employed. It works like a hydraulic or pneumatic system, except that the tube where contraction and extension are applied is itself flexible, unlike the metal valves used in conventional robotics.

A unique actuation method, which looks like the inverse of piezoelectricity, is the dielectric elastomer actuator. These actuators use electroactive polymers which deform when an electric field is applied. A common design traps a soft insulating elastomer membrane between two compliant electrodes; when a voltage is applied between the electrodes, the resulting electric field induces a thinning of the membrane and an enlargement of its area.
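The squeezing effect can be estimated from the Maxwell stress on the membrane. The Python sketch below uses illustrative numbers (permittivity, thickness, voltage and modulus are our assumptions, not values from any specific device):

```python
# Electromechanical pressure on a dielectric elastomer actuator (DEA).
# The compliant electrodes squeeze the membrane with the Maxwell stress
#     p = eps0 * eps_r * E**2,   E = V / t
# which thins the membrane and expands its area.
EPS0 = 8.854e-12      # vacuum permittivity, F/m

eps_r = 3.0           # relative permittivity of a silicone elastomer (assumed)
t = 100e-6            # membrane thickness, m (assumed)
V = 3000.0            # applied voltage, V (assumed)
Y = 1e6               # Young's modulus of the elastomer, Pa (assumed)

E = V / t                     # electric field across the membrane, V/m
p = EPS0 * eps_r * E * E      # equivalent squeezing pressure, Pa
strain = p / Y                # small-strain estimate of the thinning

print(f"field    = {E:.2e} V/m")
print(f"pressure = {p / 1e3:.1f} kPa")
print(f"thinning = {strain:.1%}")   # a few per cent thickness reduction
```

The kilovolt drive at only a few per cent strain is why DEA designs focus on thin membranes and soft, low-modulus elastomers.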

However, like every evolving technology, soft robotics brings its own set of challenges. As far as repetitive actuation goes, the problem of cyclic loading is unavoidable. Cyclic loading occurs when a structure is repeatedly subjected to the same level of stress or strain over a continuous period of time. It is a universal law of degradation, and is not a problem as long as the load stays under the material's fatigue limit. Since it is unavoidable even when
soft materials are used, the proper selection of material is paramount. The environmental parameters of the region where the robot will actuate should be examined, and a material with a suitable fatigue limit chosen. Secondly, because soft robots are made of highly compliant, largely polymeric materials, one must consider the effects of temperature. The yield stress of a material tends to decrease with temperature, and in polymeric materials this effect is even more extreme: at room temperature and above, the long chains in many polymers can stretch and slide past each other, preventing local concentrations of stress and making the material ductile.

To conclude, soft robotics can be thought of as the subset of robotics that can explore, and act upon, the nooks, corners and crevices of an area with minimal damage to both the environment and the robot. A key factor of soft robotics is its multidisciplinary nature: variable-stiffness materials, soft actuators and many other components draw on disciplines like chemistry, physics and biology, and this multidisciplinary involvement inspires and enhances engineering design. Furthermore, a bio-compliant soft robot opens up a plethora of opportunities in biomedical engineering, sports engineering and animal sciences. This is the evolution of robots from metallic, rigid structures towards a natural, lifelike and adaptive form.

-Barkha Chainani M.E.S. College of Engineering



E-glove: an electronic glove which uses thin, flexible electronic sensors and silicon-based circuit chips on nitrile gloves. It connects to a specially designed wristwatch, allowing real-time display of sensory data and remote transmission for post-processing. Worn over a prosthetic hand, it provides human-like softness, warmth, appearance and sensory perception.

Cyborg Organoids panegyric

stretching the limits

Imagine if there were a way to know how every inch of our body grew, as if we were a cell observing the whole process. Understanding the world at the cellular level could help us model how the vital components of the human body function. It would require a device that records everything happening at the structural and organizational levels of our organs.

The scientific community is no longer awed when tissues of organs are regenerated in a lab through bioengineering. Yet even with such advancements, it was not possible to gain insight into the formation of organs from the cellular level: every attempt to study them failed, because it would have required technology that could observe the process without damaging the cells as they group together. This phase of development went unseen until researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences succeeded in developing organoids with integrated sensors.


This could play an important role in studying the development of the brain, which has occupied and baffled scientists for centuries. These cyborg organoids contain stretchable mesh nanoelectronics that can relocate themselves into different layers during organogenesis. In simpler words, the sensors are present in the organoid as if they were part of its growth, changing shape along with it.

It’s nothing less than a “Pinnacle of technological achievements” as quoted by some due to its ability to interweave between the cell-to-cell attraction forces during cellular growth. The origin of this idea of stretchable and flexible mesh-like nanoelectronics that could be injected in specific regions of tissue was developed as a part of the study con. As the organoids start to morph into a 3D structure, the sensors reconfigure themselves along with the cells resulting in fully grown organoids with integrated sensors.

This is a big leap in the field of nanoelectronics, because the embedded sensors are integrated and distributed across the volume of the tissue. This form of integrated technology opens a window onto the detailed aspects of our own biological growth.

To fathom the impact of this development, we need to understand the significance of organoids. In 2018, scientists developed human oesophageal tissue from pluripotent stem cells (PSCs), which can form any tissue type in the body. Organoids make the study of organs easier: an organoid can be thought of as a miniaturized replica of an organ, generated using stem cells. A conducive environment for its growth helps replicate selected cell types of particular organs, and select proteins define the tissue during its developmental stages, ultimately determining the type of organ tissue required for the study. This helps in understanding the root of certain diseases and provides insight into the formation of organs. The goal of organoids is not to produce a new organ but to develop reductionist replicas that exhibit the organ's function.

The creation of stretchable mesh nanoelectronics made it possible to monitor the electrophysiological activity of the cells; their flexibility was enhanced by using serpentine structures instead of straight lines. The cells were further studied after being differentiated into cardiomyocytes: originally, this research was conducted to track the progress of cardiomyocytes through their entire organogenesis. The functions of vital and structurally complex organs (e.g. the brain and heart) rely on the synchronous coordination of heterogeneous cells, and understanding these systems was an uphill task; these soft, mesh-like multiplexed electronic circuits record such spatiotemporal operations and manipulate them without disrupting or damaging the tissue. The remaining question is how these circuits are fabricated and infused into the PSC-derived cells that form the organoids.

The development of the cyborg organoids was kept under constant study for a month and a half. The sensors were embedded onto a sheet of Matrigel seeded with progenitor cells, which assembled into organoids through organogenesis. The flat cell-sheets sandwich the sensors as the cells condense, packaging them into a highly compressed structure. During cell aggregation and migration, self-folding induced by organogenesis interweaves the structure, first into a bowl geometry and then morphing into a
3-D spherical structure. The resulting structure is minimally invasive; it can further be differentiated into specific functional cells like astrocytes, and their activities can be monitored. False-colour images after 480 hours of organogenesis showed that the device was distributed over the entire organoid, and this general method was used for different types of cyborg organoids. The robustness of the design was confirmed through advanced imaging techniques: after 40 days, optical images showed that the sensors were completely implanted into the organoids, adapting to their contractions.

Most of the tests conducted on the organoids were successful. The evolution of cellular physiology was mapped through electrophysiological recordings enabled by the sensors, which were localized as unfolded mesh ribbons sandwiched between the outer layer and the core of the organoid during differentiation. By mapping these activities, the gradual synchronization of the cardiomyocytes was tracked from day 26 to day 35 of differentiation. There was intimate coupling between the cardiomyocytes and the sensors, something rarely seen in mature organoids, and the functional maturation of the organoids was marked by a synchronized bursting phase in the recorded data. These patterns opened up a new line of study into tissue reconfiguration, the evolution of physiological patterns, and the flow of ions during organogenesis. Though it is only a start, the scientists speculate that these sensors could be fabricated at scale using methods like electron-beam or photo-beam lithography.

What does this mean for the world at large? The growth of organoids, miniature models of our vital organs, can now be mapped onto their original evolution. Understanding this growth reveals how cells interact with each other and synchronize during the process. This could be a major leap for tissue biology and for understanding tumorigenesis. Solid tumour cells subvert tissue-level control and sensory systems.



Holy grail: a new type of computer memory which could resolve the digital technology energy crisis. The device has an intrinsic data storage time predicted to exceed the age of the Universe, and records or deletes data using 100 times less energy than DRAM.

Within tissues, cells have the ability to stop proliferating to accommodate tissue needs. Integrating sensors into organoids that emulate the behaviour of tissue can assist in devising strategies to eliminate the expansion of renegade tumour cells and their metastases. The reorganization of cells to repair a damaged organ could also be studied and emulated. And the applications of cyborg organoids do not stop there.

Incorporating nanoelectronics into an organoid can answer questions about the growth of an embryo outside the comfort of the Earth, since organoids can also be developed and nurtured in space. A team of researchers from the University of California San Diego has partnered with Space Tango to launch a payload of stem-cell-derived human brain organoids to the International Space Station (ISS). The experiment will evaluate the organoids and aid in studying the development of neural tubes, the organization of cells into a functional brain, cell proliferation and the role of gravity in human development, while continuously observing the molecular alterations that occur in space. Most of what we know about the development of cells and organs has been learned by observing mice and animal models, and such study of cell growth
is typically done in ideal environments like pathogen-free units. In real life, however, the organs of the human body are exposed to many hazards. Cells isolated from several affected areas of lungs have been used to generate organoids and obtain information on smoking history, origin, medication and disease classification, with the final structure and cellular composition compared against the alveoli of the adult lung. Nanoelectronics could help track their growth, generate reports on their reaction to external stimuli, and support the development of new drugs for lung disorders. Furthering this research might also help in studying neurodegenerative diseases like Alzheimer's disease. Medical treatments could even be tailored to patients by taking a swab of cells from a patient and reverting them to stem cells: a major leap for informed medical decisions in the field of theranostics.

These findings could help shape future technologies and the study of sophisticated neural activity, in which lies the key to cures for several neurological conditions. The progress in this field does sound futuristic: some decades from now, people may wear walnut-sized implants that record everything happening inside the body, conduct replacement activities and dictate repairs. This is not far-fetched; scientists are already researching how to merge organoids with organs-on-chips, which could overcome challenges such as controlling the micro-environment around the organoids and modelling multi-organ interaction, reducing variability in the organoids. Research in this field is finally gaining traction. It lets us envisage a future where developments in genetic engineering and live imaging help discover innovative therapeutic methods and visualize early human development.

-The Editorial Board

Black Holes philomath

photographing an enigma

In 1915, when Einstein proposed the revolutionary general theory of relativity, he said that gravity is not a force but a distortion of spacetime caused by mass: the greater the mass, the greater the distortion. But he found that his field equations predicted the existence of a very strange object, infinitely dense at its core, which could tear the fabric of spacetime. These objects were black holes. Ever since, scientists have been trying to find them. Valuable contributions by Karl Schwarzschild and Dr Stephen Hawking produced many theories, yet all attempts to capture a real image of a black hole had failed.




On April 10th, 2019, astronomers finally captured what had so far existed only as theory on paper: an image of one of the most enigmatic objects in the universe. The picture showed a halo of dust and gas tracing the outline of a colossal black hole at the heart of the galaxy Messier 87, 55 million light-years from Earth.

The first and most important task was to select a black hole from two candidates: Sagittarius A* at the centre of the Milky Way, and M87* at the centre of the gargantuan elliptical galaxy Messier 87. Sagittarius A*, the closest supermassive black hole to Earth, sits in our galactic backyard, 26,000 light-years (156 quadrillion miles) away, and is the black hole that appears largest from the Earth; its relatively large apparent size is why the EHT team chose it as one target. The second target, M87*, is one of the largest known supermassive black holes, 55 million light-years away. It is an active black hole: matter falls into it and is spewed out as jets of particles accelerated to velocities near the speed of light.

For objects near us, light arrives as a spherical wavefront that a single telescope can handle; light from a source as distant as a black hole arrives as an effectively plane wavefront, and resolving so tiny an angular size would require a telescope the size of the Earth, which is impossible to build. Instead, scientists created a series of atomic-clock-synchronized telescopes spread across different parts of the Earth, exploiting the planet's rotation: the Event Horizon Telescope (EHT). To image M87*, the telescopes of the EHT observed the black hole continuously for years, recording radio waves at a wavelength of 1.3 mm. The data generated ran to several petabytes, all of it used to reconstruct the image of the black hole. Because the accretion disk moves at nearly the speed of light, the part receding from us appears darker and the part approaching us appears brighter. Thus a topic of intense international speculation, black holes, was finally unravelled for the world.
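To see why an Earth-sized aperture is needed, compare the diffraction limit, roughly wavelength over aperture, with the expected angular size of M87*'s shadow. The Python sketch below uses published estimates for the mass and distance; the factor of about 2.6 Schwarzschild radii for the photon-ring radius comes from general relativity.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # solar mass, kg
LY = 9.461e15          # light-year, m

M = 6.5e9 * M_SUN      # M87* mass estimate (published value)
D = 55e6 * LY          # distance to M87 (published value)

r_s = 2 * G * M / C**2          # Schwarzschild radius of M87*
shadow = 2 * 2.6 * r_s          # shadow diameter, radius ~2.6 r_s
theta = shadow / D              # angular size, radians
to_uas = math.degrees(1) * 3600e6   # radians -> microarcseconds

print(f"shadow size    ~ {theta * to_uas:.0f} microarcseconds")       # ~40
print(f"EHT resolution ~ {1.3e-3 / 1.2742e7 * to_uas:.0f} microarcseconds")
# lambda / (Earth diameter) ~ 21 microarcseconds: only a telescope array
# spanning the whole planet can resolve the shadow.
```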


-Hrushikesh Patil Pune Institute of Computer Technology



BigStitcher philomath

refining medical imagery

Surgery simulation is a technology used to simulate a surgical environment, essential for training medical professionals without the need for a human or an animal. It demands high-quality, realistic rendering of organs, which is critical to a successful operation. Modern light-sheet microscopes can swiftly image large samples; however, the amount of data generated, owing to the superior imaging quality required, is very arduous to organize and process.

A team led by Max Delbrück Center scientist Dr Stephan Preibisch has developed a software program, BigStitcher, that helps reconstruct the data and model it in a more intelligible manner. The model resembles Google Maps in 3D space, which enhances its usability. The development of BigStitcher began a decade ago, and it has since evolved into a powerful software tool for researchers and medical professionals alike. It not only provides an overview of the image but also allows zooming in to examine individual structures at the desired resolution, so the minuscule sites of a human or animal organ can be examined more deeply and precisely. This 'diving-deeper' feature answers questions such as exactly where cell division is currently taking place, or where particular neuron projections end.


BigStitcher also allows previously-imaged samples to be exhibited on screen at any level of precision the user demands. It thus provides transparency in imaging as well as a high-quality picture of the desired area of the organ to be analysed. Additionally, it evaluates the quality of the generated data, which differs depending on how well the tool captures the details of the subject at specific locations. The quality of the image directly influences the quality of the data: the brighter a particular region of the organ, the better the validity and authenticity of the data generated. As one hundred per cent flawless rendering is practically unachievable, BigStitcher permits the user to turn and rotate the captured image in any direction, so it can be viewed from any angle.

One of BigStitcher's biggest advantages is that it does not need a supercomputer. Dr Preibisch's team guarantees that reconstructing and scaling data obtained from light-sheet microscopy is possible using tailored algorithms, so a standard computer is sufficient. This further enables easy sharing of data across research teams, reducing the struggle of data representation and transformation. Furthermore, it is free software, accessible to every researcher who needs it rather than an elite commodity affordable only to high-profile research institutes and companies. BigStitcher thus promises a paradigm shift in image-rendering software, giving medical research and surgery simulation a motivating push towards a profound process of organ imaging.
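BigStitcher's own algorithms are more sophisticated, but the core of tile stitching can be illustrated with phase correlation. The hypothetical sketch below (function name and data are ours, not from the tool) recovers the translation between two overlapping tiles from the peak of their normalized cross-power spectrum, assuming only numpy.

```python
import numpy as np

def phase_correlation_shift(tile_a: np.ndarray, tile_b: np.ndarray):
    """Estimate the (row, col) translation between two overlapping image
    tiles from the peak of their normalized cross-power spectrum."""
    fa = np.fft.fft2(tile_a)
    fb = np.fft.fft2(tile_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12          # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks past the midpoint correspond to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
scene = rng.random((128, 128))              # a synthetic specimen
a = scene[:100, :100]                       # two overlapping tiles
b = scene[7:107, 11:111]                    # b offset from a by (7, 11)
print(phase_correlation_shift(a, b))        # -> (7, 11)
```

Repeating this for every pair of neighbouring tiles, then solving for globally consistent tile positions, is essentially the reconstruction problem such tools automate.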

-Vedant Puranik Pune Institute of Computer Technology


Ballistic Technology philomath

tracking criminals precisely

In 1929, Al Capone's men gunned down seven rival mobsters with Tommy guns. This massacre was the first time ballistic analysis was used: Calvin Goddard, a forensic scientist, used a ballistic analysis tool called a comparison microscope to match crime-scene bullets with guns from Capone's thugs, showing that two bullets with the same characteristic markings likely came from the same gun and thereby tying weapon owners to crime scenes. A comparison microscope gives a split view for studying two different objects at once. The technique works because ammunition metal is soft while the metal in a gun's barrel is hard, leaving bullets scraped or scratched as they are loaded and then fired.

Nearly 100 years later, the comparison microscope is still in use because it is available even in local laboratories. The results of such tests are subjective, resting on the opinions of police experts. A team from the National Institute of Standards and Technology (NIST) has devised a new technique for ballistic analysis. First, a 3D image of the crime-scene bullet is taken; next, the suspected gun is test-fired and its casings are scanned. Each becomes a three-dimensional virtual object whose surfaces algorithms can mimic and compare. The algorithm automatically divides the bullet or casing into small regions of interest, dropping those that are not useful, and then assigns a numerical score to every match.
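As a rough sketch of the scoring step (ours, not NIST's actual algorithm), two scanned striation profiles can be compared with normalized cross-correlation, yielding a score near 1 for the same barrel and near 0 for an unrelated one:

```python
import numpy as np

def similarity_score(profile_a: np.ndarray, profile_b: np.ndarray) -> float:
    """Normalized cross-correlation between two scanned surface profiles.

    A score of 1.0 means the striation patterns match exactly; values
    near 0 mean no relationship. Real systems score many small regions
    of interest and aggregate the per-region results.
    """
    a = (profile_a - profile_a.mean()) / profile_a.std()
    b = (profile_b - profile_b.mean()) / profile_b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(1)
gun_marks = rng.normal(size=2000)                   # striations from one barrel
evidence = gun_marks + 0.3 * rng.normal(size=2000)  # same gun + scan noise
other = rng.normal(size=2000)                       # a different gun

print(f"same gun:      {similarity_score(gun_marks, evidence):.2f}")  # ~0.96
print(f"different gun: {similarity_score(gun_marks, other):.2f}")     # ~0.00
```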

The 3D images reveal more detail than an examiner could see under a microscope, helping to find criminals and secure convictions. The system greatly improves precision because there is minimal scope for human error. "In every measurement, there is a degree of doubt, and what we're doing is measuring it," says Thompson, a 30-year veteran forensic scientist who works at NIST. Another tool, developed by the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), pioneered the use of automated ballistic imaging and computer-comparison equipment to analyse crime-gun evidence, making the identification process faster and more efficient. ATF's National Integrated Ballistic Information Network (NIBIN) came out in June 2000 and currently provides Integrated Ballistic Identification System (IBIS) equipment to approximately 150 law-enforcement agencies nationwide. The equipment compares images of marked cartridge cases in evidence against images previously stored in the NIBIN database. A 2004 study of the Boston Police Department showed that the use of NIBIN "was associated with a more than sixfold increase in the monthly number of ballistics matches," and that the technology allowed police to make matches that were difficult with traditional methods. These tools are not yet used on a wide scale, mostly because there is still scope for improvement. Ongoing research suggests that as gun crime increases, these techniques can be used and improved upon so that they become easily available to all local and state police departments.

-Rohit Nagotkar Pune Institute of Computer Technology




Fibre robot: a magnetically steerable fibre that can actively glide through narrow pathways such as the vasculature of the brain. The core of the robotic thread is made from nitinol and is coated with a rubbery paste, or ink, embedded throughout with magnetic particles. It has been demonstrated in a life-size silicone replica of the brain's major blood vessels, including clots and aneurysms.

Virtual Reality philomath

conquering fears virtually

The sci-fi thriller Ready Player One, based on Ernest Cline's novel, centres on a VR game called OASIS that plugs its characters into the wildest experiences. While their bodies stay paused in a single room, their minds wander a virtual universe where they are free to ride motorcycles, shoot, control robots and, in short, escape the dreads of reality. This distraction of a virtual universe, which lets users live 'in a whole new world', can be used extensively in the medical domain to relieve patients of their suffering.


Virtual Reality (VR) is a computer-generated simulation using 3D images, allowing users to interact with virtual objects and giving them a real-world experience. Pain and anxiety require attention from the human brain to be sensed and experienced; if some of that attention can be diverted, the amount of anxiety and pain felt can be reduced. VR makes it possible to set up a controlled environment around a phobic person and give them time to recover at their own pace. Used as an adjunct to traditional therapy, VR can reduce reliance on opioids and their side effects, letting patients focus on their mental health.

Patients suffering from phobias, PTSD (Post-Traumatic Stress Disorder) and depression usually suffer from anxiety caused by the way they think about the feared object or experience. That is where exposure therapy comes in. Exposure therapy, like any pain-control programme, lets people change the way they think and feel, and get comfortable with the objects they fear most. In the real world, people with extreme phobias can suffer serious consequences in the initial stages of therapy, such as heart attacks or light-headedness. Furthermore, opioids used to relieve pain can have side effects such as sedation, nausea and constipation, which extend a patient's Length of Stay (LOS) in a medical institution, and the extended costs make it difficult to continue therapy over long periods.

VR also eliminates most of the threat the patient fears by giving them an assurance of safety. Combine that with a VR glove acting as a cyberhand, and perhaps a toy to give the feeling of touching the feared object, and you get a complete package for rescuing a phobic person from their anxiety. A case study described in 'Virtual Reality Therapy' by Hunter G. Hoffman for Scientific American, on a patient with obsessive-compulsive behaviours stemming from a fear of spiders, showed a significant reduction in fear after VR exposure therapy. It used a virtual-reality program called SpiderWorld, which lets the professional monitor the phobic person across levels and expose them to progressively more challenging scenarios; the patient can also touch a spider toy while being visually presented with a spider crawling in the environment around them. In the reported results, 83 per cent of patients showed a significant decrease in their fear of spiders. Similar techniques can help burn victims, or victims of traumatic events such as terrorist attacks, by diverting their attention during wound treatment and recovery; in this case, VR serves as a supplement to traditional analgesics. Recent MRI scans, taken while patients used a VR headset for therapy, reveal that VR does not just change how patients interpret incoming pain signals: it also reduces the amount of pain-related brain activity.

This lowers the chance of falling into clinical depression and raises the chance of leading a healthier life. VR can also let reserved, diffident candidates reap the benefits of a psychologist despite their timid nature. In the future, it could help students and employees overcome their fear of public speaking: the speaker can be exposed to different types of audiences at different levels of difficulty, training themselves to deliver with confidence regardless of the number and status of the people around them.


These systems and therapies might seem expensive; traditionally, VR technology was available only to specialized research institutions and gaming technicians, and VR as a tool for pain management is still in its early developmental stages. But it reduces the use of opioids and patients' LOS, saving treatment costs while providing better patient satisfaction. With technology rapidly evolving, growing interest in complementary non-pharmacological interventions, and the reported burden and disability associated with rising rates of chronic pain, VR is quickly gaining attention as a complementary pain-management strategy.

Mobile apps to help people with minor anxiety are also in use, varying with the type of fear the user is dealing with. Richie's Plank Experience lets you walk a virtual plank atop a skyscraper. The Landscapes and Cityscapes apps, part of Samsung's #BeFearless drive, deliver challenges progressively and cater to people with different levels of anxiety, as does #BeFearless for Public Speaking (Personal Life, School Life and Business Life), also initiated by Samsung. Arachnophobia by IgnisVR is a self-controlled VR tool for people who fear spiders, and Relax VR takes you to a virtual happy place to relax while flying, to prevent claustrophobia.

VR has plenty of potential in reducing acute and chronic pain, and in psychiatric and pain/physical rehabilitation, over the next decade. It is currently being tested for a wide variety of conditions, including obesity, anxiety, oncology and neurorehabilitation. As the isolated costs of adjuvant VR therapy decrease and its customizability increases, it will serve patients from many strata of society, and VR tools will eventually become part of the basic healthcare provider's toolkit. The ability to instantly transport a patient into a virtual world, whether for distraction, exposure to a feared situation, or to augment diaphragmatic breathing, guided imagery or self-hypnosis, makes VR a tremendously powerful tool. Once the costs of VR therapy fall to an all-time low, private and home use of such tools will develop, helping with chronic pain sensing and management and augmenting other therapies such as hypnosis or biofeedback. The accumulation of detailed neurological data will also boost study and research in neurological healthcare and technology. In truth, we are only at the genesis of another revolution.





-Nirvi Vakharia Pune Institute of Computer Technology



GauGAN philomath

photorealistic doodles using AI

Any novice painter aiming to create seemingly realistic sceneries in MS Paint would end up with a multi-coloured inkblot. At its GPU Technology Conference (GTC) 2019, Nvidia showed how a deep-learning model can convert simple doodles of a landscape into a photorealistic vista that may not exist anywhere in the real world. Nvidia calls it GauGAN, after the post-impressionist artist Paul Gauguin.

Amazingly, the software is much simpler than MS Paint, consisting of just two tools: a paint bucket and a paint brush. After selecting a label like tree, river, mountain, rock, road or sky, one uses a tool to build the segmentation map. Various styles can be applied by regenerating the style code or by borrowing styles from guide images. Just by adding water, the generator shows reflections, not because it was told to but because it learnt to; just by changing grass to snow, it renders a new image with bare trees, snow-covered mountains and an appropriately coloured sky. The arrangement of labels in the sketch tells the software what each part of the doodle represents, and it generates a realistic version of the imagined landscape. The tool leverages generative adversarial networks to convert the segmentation map into lifelike images. Unlike other deep-learning techniques, which require huge amounts of labelled training data, GANs provide a unique way to train deep-learning algorithms to create labelled data from
the existing data. Rather than a single neural network, they consist of two competing networks: a generator and a discriminator. This image-synthesis model is built on GANs but aims at a specific form of conditional image synthesis. Previous methods like CRN, SIMS or pix2pixHD fed the semantic layout directly as input to a deep network, processing it through stacks of convolution, normalization and nonlinearity layers to obtain a photorealistic image. As Taesung Park, Ming-Yu Liu, Ting-Chun Wang and Jun-Yan Zhu show in their paper 'Semantic Image Synthesis with Spatially-Adaptive Normalization', the normalization layers tend to 'wash away' semantic information (e.g. the existence of a tree). To address this, they propose spatially-adaptive normalization: a conditional normalization layer that modulates the activations using the input semantic layout through a spatially-adaptive, learned transformation. In effect it applies a different scaling and bias for each semantic label, which effectively propagates the semantic information throughout the network.
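A minimal PyTorch sketch of the idea follows; it is simplified from the paper's implementation, and the layer sizes and names are our assumptions. The segmentation map predicts a per-pixel scale and bias that re-modulate the normalized activations, so the label layout survives normalization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Spatially-adaptive normalization (simplified sketch).

    Normalizes activations with parameter-free batch norm, then
    modulates them with a scale (gamma) and bias (beta) predicted
    per pixel from the semantic segmentation map.
    """
    def __init__(self, num_features: int, num_labels: int, hidden: int = 128):
        super().__init__()
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(num_labels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gamma = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, segmap: torch.Tensor) -> torch.Tensor:
        # segmap: one-hot label map, shape (N, num_labels, H', W')
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(segmap)
        # per-location, per-channel modulation instead of one global
        # learned scale and bias for the whole feature map
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

# usage: 64 feature channels, 6 semantic labels (e.g. sky, water, tree...)
x = torch.randn(1, 64, 32, 32)
seg = F.one_hot(torch.randint(0, 6, (1, 8, 8)), 6).permute(0, 3, 1, 2).float()
out = SPADE(64, 6)(x, seg)
print(out.shape)   # torch.Size([1, 64, 32, 32])
```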

GauGAN could offer a powerful tool for creating virtual worlds in which to train robots and self-driving cars, and it can be used by everyone from architects and urban planners to landscape designers and game developers. With an AI that understands how the real world looks, these professionals could better prototype ideas and make rapid changes to a synthetic scene. GANs themselves still require additional research to reach their potential: the results, though impressive, fall short of resembling reality, and although Nvidia calls them 'photorealistic', there can be revealing glitches and hard edges that do not look natural. Still, it is a lot better than any novice painter. With automation and mass production now standard across industries, we will surely one day hear people say: "Do you remember the good old times when video games were hand-crafted?"

-Sudhanshu Bhoi Pune Institute of Computer Technology

Policing AI philomath

mimicking intelligence artificially

Predictive policing is an AI technology that tells law-enforcement leaders when and where to deploy officers in order to prevent crime. The effectiveness of this technology is debatable, but the bigger debate is whether AI itself needs policing.

AI (Artificial Intelligence) augmentation is defined as a human-centred partnership model of humans and AI working together. The fact that AI can perform tasks we want without being explicitly programmed for everything makes a huge case for its business value, but does it really understand what we want? A bot deep in a game of tic-tac-toe figured out that making improbable moves caused its opponent's bot to crash: AI-aided bots can teach themselves to cheat. Google's DeepMind researchers keep a zoo of AI bugs such as:
• Optical illusion: a gripper meant to grasp a ball taught itself to exploit camera angles so that it appeared successful on all test cases without even touching the ball.
• A neural network trained to optimise electricity grids caused blackouts to save power on some borderline test cases.
• Space war: algorithms exploited flaws in a galactic videogame to invent new weapons to destroy opponents.

The above instantiation of bugs deliberately exposes the flaws of some adaptive reinforcement learning algorithms, wherein the side effects greatly outweigh the use cases. The bots learn by interacting with their environments, receiving rewards for correct performance and penalties for incorrect performance; as these bugs demonstrate, the bots can hack their own environments to trigger a shower of rewards. These cases are neither acute nor scary, but they are interesting scenarios from a game-theory point of view.
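A toy illustration of such reward hacking (ours, not one of DeepMind's examples): a tabular Q-learner in a five-state corridor abandons the intended goal because a buggy reward function pays it, repeatedly, for loitering at one end.

```python
import random

# Intended task: walk right along a 1-D corridor to the goal at state 4.
# A bug also pays a small reward for every visit to state 0, and the
# Q-learner discovers that farming the bug forever beats the goal.
ACTIONS = (-1, +1)
N_STATES, GOAL, START = 5, 4, 2

def buggy_reward(state: int) -> float:
    if state == GOAL:
        return 3.0    # intended one-off reward
    if state == 0:
        return 1.0    # bug: repeatable reward, infinitely farmable
    return 0.0

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2
state = START
for _ in range(20000):
    if random.random() < eps:
        a = random.choice(ACTIONS)            # occasional exploration
    else:
        a = max(ACTIONS, key=lambda act: q[(state, act)])
    nxt = min(max(state + a, 0), N_STATES - 1)
    r = buggy_reward(nxt)
    # standard Q-learning update
    target = r + gamma * max(q[(nxt, act)] for act in ACTIONS)
    q[(state, a)] += alpha * (target - q[(state, a)])
    state = START if nxt == GOAL else nxt     # episode ends at the goal

policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES)}
print(policy)   # every state walks left toward the exploit, ignoring the goal
```

The discounted stream of small buggy rewards is worth more than the one-off goal reward, so the learned policy heads away from the task it was built for: a miniature version of the environment-hacking behaviour described above.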

Such algorithms are used to personalize a user's feed to their tastes, enhancing the customer experience. What the designers do not take into consideration is that human tastes are not fixed: humans are malleable, and exposure to such algorithms can modify our tastes to make us even more predictable. A business uses AI-driven tools to improve customer experience; the AI does what is required but also teaches itself to cheat and tweaks the users' choices. After manipulating user behaviour and choices, the bots can clutter the user's feed with whatever they want. That AI can be used to manipulate user behaviour is not unheard of; what is being established here is that AI can generate bots that teach themselves to cheat and manipulate user behaviour even in the absence of any malicious intent.

So AI algorithms definitely need policing, and the testing of these algorithms needs to be as rigorous as drug trials. Controlled testing of drugs is performed on animals to establish safety, then on a small population to establish efficacy. Unit testing is equivalent to checking drugs for contaminants but not for side effects, and its loopholes are quite evident by now; thorough system-level checks are required to investigate the second- and third-order effects of software deployment. There is no doubt about the use cases and business value AI can generate, but its loopholes make it even more interesting to work on.

-Suyash Sardeshpande Pune Institute of Computer Technology



Robotic Worms philomath

navigating blood vessels

Innovations in robotics have the power to minimize the risks of medical surgery, and one of MIT engineers' recent inventions is no exception. They have developed a magnetically steerable, thread-like robot that can actively glide through narrow, meandering pathways, such as the labyrinthine vasculature of the brain. This magnetically controlled robotic worm could one day make brain surgery less invasive by worming its way through hard-to-reach blood vessels.

In the future, this robotic thread may be paired with existing endovascular technologies, enabling doctors to steer the robot through a patient's brain vessels and rapidly treat the blockages and lesions that occur in aneurysms and strokes. Xuanhe Zhao, Associate Professor of Civil, Mechanical and Environmental Engineering at MIT, notes that stroke, the fifth leading cause of death in the United States, has to be treated within the first 90 minutes; if not handled in time, the patient's survival rate drops substantially. Hence the need for a device that can reverse blood-vessel blockage within this 'crucial time' and potentially avoid permanent brain damage. Traditionally, doctors clear blood clots in the brain with an endovascular procedure, a minimally invasive surgery in which the surgeon inserts a thin wire through the patient's main artery near the groin. A fluoroscope simultaneously images the blood vessels using X-rays, and guided by those images the surgeon
manually rotates the wire up into the damaged brain vessel. Drugs or clot-retrieval devices are then delivered to the affected region by threading a catheter along the wire. The procedure is physically laborious and demands specially trained surgeons who endure repeated radiation exposure from fluoroscopy.

Researchers at MIT combined their work on magnetic actuation and hydrogels to produce a magnetically steerable, hydrogel-coated robotic thread, thin enough to guide through a silicone replica of the brain's blood vessels. The thread's core is made from nitinol, a nickel-titanium alloy that returns to its original shape when bent, letting it wind through compact, undulating blood vessels. The core is coated in a rubbery paste embedded throughout with magnetic particles, and the magnetic covering is in turn bonded with hydrogel, giving a smooth, friction-free, biocompatible surface. The hydrogel gives the thread a slippery advantage, letting it steer through tight spaces without getting stuck.

The robotic thread removes the radiation problem of endovascular surgery: the surgeon no longer has to work beside a patient and a fluoroscope that repeatedly generates radiation, nor manually push a wire through the patient's blood vessels. The MIT team envisions combining such endovascular surgery with existing magnetic technologies: a pair of large magnets could be directed and manipulated by doctors from just outside the operating room, or even remotely, away from the fluoroscope imaging the patient's brain or in an entirely different location. In the near future this technology could be a boon for brain surgery, offering many advantages to surgeons and ultimately increasing patients' survival rates.

-Swarali Borde Pune Institute of Computer Technology


Iron-60 in Antarctica philomath

a cosmic rarity

A team of German scientists has recently found traces of iron-60, a very rare radioactive isotope of iron with negligible natural occurrence. This profound work opens many gateways for studying the solar system's exact location within the gas cloud that surrounds it, and its position in the years to come.

Iron-60 is rare because only a minimal quantity reaches the Earth from outer space. It is a radioactive form of iron containing 26 protons and 34 neutrons that comes from our interstellar neighbourhood, carrying information about the solar system's environment; its origins are confirmed to be reactions between cosmic radiation and cosmic dust. The presence of iron-60 on the Earth's surface was first confirmed 20 years ago, when it was discovered in ocean deposits, and Brian Fields, an astrophysicist at the University of Illinois, was among the first to suggest deep-sea excavations in search of stellar leftovers. Dr Gunther Korschinek was the first to hypothesize the presence of iron-60 in the pure, freshly fallen snow of Antarctica, the most untainted place on Earth. To verify this assumption, around 500 kg of snow was collected at Kohnen Station, a container settlement in the Antarctic, and transported to Munich. The
snow was melted and the solid sediments obtained were processed by using a particle accelerator to carry out extremely sensitive mass spectrometry. This extracted about 73,000 atoms of iron-60. Dominik Koll, a physicist at the Australian National University, said, “It’s actual stardust.” It was believed that its sources could be cosmic radiation, nuclear weapons test or reactor accidents. If this material was absent on earth and the isotopes of iron are only produced by stars, it meant that it had come from somewhere beyond the solar system. Also, iron-60 has a half-life of 2.6 million years. Therefore, its remains were untraceable due to the formation of the earth. Hence, it was confirmed that iron-60 was the result of recent supernovae activity. A supernova is an interstellar gas cloud in which our solar system resides. Our solar system entered one of these gas clouds 40,000 years ago, travelling at a speed of 26 kilometres per second, and will exit it in a few thousand years. This cloud might be the result of a supernova shock wave which pressurized and ionized gas, as suggested by Jeffrey Linsky at JILA in Boulder, California. As per the assumption, any future change in the rate of change of deposition in iron-60 might indicate the position of the solar system in this cloud. The radioactive particles of iron-60 might have bounced back from the Local Interstellar Cloud (LIC), a pocket of dense interstellar medium that contains several cloudlets of interstellar dust towards the earth. Priscilla Frisch, an astrophysicist at the University of Chicago, said that the terrestrial magnetic field lines approached closest to the surface of the Earth near the Northern and Southern polar regions, getting attracted along the contours of the Earth’s magnetic field, and confirmed that Iron-60 was generated by a supernova. This result is important because it gives us a new kind of data to work out the origin for the iron-60 and helps us understand the interaction of the solar system with cosmic dust. -Anushka Mali Pune Institute of Computer Technology ISSUE 15.1
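As a back-of-the-envelope footnote to the half-life argument above, the decay arithmetic can be checked in a few lines of Python (only the 2.6-million-year half-life comes from the article; the rest is the standard half-life formula):

# Why no primordial iron-60 survives from Earth's formation, while
# supernova debris from the last few million years still can.
HALF_LIFE_YR = 2.6e6

def surviving_fraction(age_yr):
    """Fraction of an initial iron-60 sample left after age_yr years."""
    return 0.5 ** (age_yr / HALF_LIFE_YR)

print(surviving_fraction(4.5e9))   # Earth's age: ~2^(-1731), effectively zero
print(surviving_fraction(2.6e6))   # one half-life: 0.5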



Perovskite

A new inorganic perovskite, Cesium Lead Iodide (CsPbI3), could slowly replace the ever-popular silicon in the solar world. It is often studied in its alpha phase, the dark phase, which is good at absorbing sunlight; the beta phase is more stable than the alpha phase but converts energy less efficiently.


Carbon Capture philomath

chipping away at the damage

If you're reading this article, there's a significant chance you recognise the impact of climate change on our planet. Studies have found that climate change tops the list of issues worrying Gen X and Gen Z: they fear the effect it will have on the lives of their children and grandchildren. Even if we stopped the emission of greenhouse gases at once, including CO2, global warming would continue for several more decades.

Several startups and scientists across the globe have taken the initiative to develop and commercialise processes that capture existing greenhouse gases and convert them into useful industrial and commercial products such as building materials, alternative fuels and day-to-day items. The International Energy Agency says, "Carbon capture is one of the only technology solutions that can significantly reduce emissions from coal and gas power generation and deliver the deep emissions reductions needed across key industrial processes."

Solar Foods, a Finnish startup based out of the VTT Technical Research Centre of Finland and the Lappeenranta University of Technology (LUT), has developed a new technique to produce a protein alternative it calls "Solein". According to the company's estimates, Solein's nutrient content separates into about 50% protein, 25% carbohydrates and 5-10% fat, and 1 kg of Solein can offer a full day's worth of protein for seven to ten individuals.

The protein alternative is produced from air (CO2 and O2), water and renewable electricity. Water is split into hydrogen and oxygen by electrolysis, and the hydrogen, together with minerals such as sodium and potassium, is fed to a bacterium. Producing 1 kg of Solein requires about 10 litres of water, whereas 1 kg of beef and 1 kg of soy need roughly 15,500 litres and 2,500 litres respectively, which makes Solein remarkably eco-friendly; the company also claims Solein is ten times as efficient as soy production. The team plans to sell a kilogramme of Solein for 7 to 10 euros by 2021, but it still has a long way to go: Solar Foods must scale up production and meet strict safety regulations before it can launch a new food product.

Lithium-CO2 batteries are another way to tackle the problem of excess CO2. The best way to stop CO2 from entering the atmosphere is to capture it at its source, and many institutes are exploring and deploying carbon capture and sequestration (CCS) systems, which use a chemical reaction to capture CO2 before it leaves the plant. The last stage of this reaction requires high temperatures, and therefore a large quantity of energy. To avoid that cost, a team at MIT wants to capture the CO2 with a non-aqueous electrochemical reaction instead of an expensive CCS system: they effectively turn the reactivity of CO2 'on' with the help of a 'sorbent' molecule, an amine derived from ammonia, of the same family already used in CCS systems.

With the help of the right electrolyte, the team observed lithium carbonate (Li2CO3) deposits in an electrochemical cell comprising a lithium anode and a carbon cathode, with high discharge voltages of up to 3 volts and very high capacities, similar to those of state-of-the-art lithium-based batteries. The sorbent can thus serve a dual purpose: absorbing CO2 and storing electrical energy. Several issues still need to be worked out, and we need a great deal of research before we can understand the potential and unlock the mysteries of the lithium-CO2 battery.

CarbonCure is another organisation leading the movement to reduce the carbon footprint of the concrete industry; cement production accounts for 7% of global CO2 emissions. Existing plants can be retrofitted with technology that captures CO2 in its gaseous state and converts it to liquefied CO2. CarbonCure plans to buy the liquefied CO2 from such plants, or retrofit existing plants with its own technology, and infuse the CO2 into the concrete mix. When the mix is poured on the construction site, the CO2 reacts with the minerals and forms limestone, trapping the gas forever because it has reacted at a molecular level; the end product is as durable as regular concrete. XPRIZE has named CarbonCure a finalist in the $20 million NRG COSIA Carbon XPRIZE challenge.

Dimensional Energy, a startup based out of Cornell University, proposes to set up CO2 refineries at an industrial scale. Like the team at MIT, Dimensional Energy wants to install its HI-LIGHT chemical reactor next to factories so that the CO2 can be converted into fuel. Pioneers in artificial photosynthesis, they feed gaseous CO2 through the reactor along with sunlight and hydrogen to produce liquid fuel, as well as green chemicals and polymers. The liquid from the reactor can be used for transportation, heating, energy and gas stoves, and since the methanol is produced from CO2 removed from the atmosphere, we can use it guilt-free. Dimensional Energy has demonstrated its reactor technology successfully and has partnered with a coal plant in Wyoming, USA to set up a pilot programme to test and prove it. The company believes, however, that the rate of advancement and investment in carbon-capture technology needs to increase at once.

Hundreds of other companies and researchers around the globe are trying to reduce the carbon footprint of different sectors. Breathe, a startup based out of Bangalore, converts CO2 into highly pure methanol, which can be used as fuel for vehicles; another, C4X, converts CO2 into high-value chemicals and bio-composite foamed plastics. Currently, carbon capture pulls out only about 0.1% of overall carbon emissions, but that number is set to rise over the next few decades. All of these people and organisations can make a difference; it is only a matter of time before they scale up production and start chipping away at the damage that has been done to this planet.

-Daksh Kanoria
Pune Institute of Computer Technology
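As a quick sanity check of the water figures quoted in the article, here is the plain arithmetic in Python (all litres-per-kilogramme numbers are the article's own):

# Water use per kilogramme of product, from the figures above.
water_l_per_kg = {"Solein": 10, "soy": 2_500, "beef": 15_500}

for food, litres in water_l_per_kg.items():
    ratio = litres / water_l_per_kg["Solein"]
    print(f"{food:>6}: {litres:>6} L/kg ({ratio:.0f}x Solein)")
# beef comes out ~1550x and soy ~250x more water-intensive than Solein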



DroneBolt philomath

synchronised drones

Remote control of various craft has evolved from remote piloting to performing autonomous missions. Given the convenience of remote and autonomous travel, drones are becoming more popular by the day. Since drones make the amazing world of flight available to everyone, we drone enthusiasts (Ashwin Kotgire, Japjyot Gulati, Mohit Arora, Pallavi Dadape, Prajakta Lanje and Siddhant Nikumbh of Pune Institute of Computer Technology) decided to build drones for the Smart India Hackathon (SIH), a government-organised national-level competition. Thus, DroneBolt was born.

In SIH, we had to build a pair of drones that flew autonomously and synchronously while maintaining a constant relative distance from each other and from the ground. From knowing nothing about drone making to becoming veterans of it, we learned to pinpoint faults in drones and fix them. We faced plenty of issues along the way, from drones not flying at all to two drones crashing into one another. Thanks to the team's technical research, the Principal of our college, Dr P. T. Kulkarni, and the SIH single point of contact (SPOC), Mrs K. A. Sultanpure, we were able to tweak our project constructively. We now believe that, using our algorithm, we can fly up to nine drones synchronously.

With our eyes on the mission and our feet on the ground, we took a step-by-step approach to each obstacle, learning through failures and building not only the drones but also the software to fly them together. We believe the ground control system (GCS) we are developing will be very robust; it could be used to fly drones autonomously in varied situations, from drop-shipping to multiple-drone formations.

Using conventional methods such as motor testing, ESC calibration and propeller guards to prevent accidents, and non-conventional methods such as zip ties to keep wires together and sponge balls to absorb the impact of landing, our drones turned out to be quite systematic and secure. The basic drones we built are robust and versatile craft: a good battery and heavy motors let them fly for more than 45 minutes and perform side-rolls. Seeing our potential, the Government of India has approached us for a drone-based surveillance-system project, as have two private firms, one from California and one from Australia, for projects in goods shipping and entertainment respectively. This led us to conclude that it was the right time to evolve from a team into a start-up. We have also decided to build our own products for sectors of disaster control, such as fire-fighting drones and surveillance of hostile areas. These being our initial steps, we plan to make a name for ourselves in both domestic and international markets. From being proficient at designing micro-size drones that weigh approximately 2 kg to larger drones that can carry a weight of 5 kg for 30-45 minutes, DroneBolt has come a long way. Next, we aim at vertical take-off and landing with fixed-wing long-range drones. With DroneBolt as the ship and its members as the crew, we will take advantage of the winds of circumstance and conquer the ocean of our dreams.

-Team DroneBolt
Pune Institute of Computer Technology
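DroneBolt's ground control software is not public, so as a purely illustrative sketch, the following Python shows the kind of leader-follower update loop that can hold two drones at a constant relative offset (every name and constant here is invented):

def follower_velocity(leader_pos, follower_pos, offset, gain=1.2):
    """Proportional controller: command a velocity that shrinks the error
    between the follower's position and (leader position + desired offset)."""
    target = [l + o for l, o in zip(leader_pos, offset)]
    return [gain * (t - f) for t, f in zip(target, follower_pos)]

leader = [10.0, 5.0, 20.0]          # metres (x, y, altitude)
follower = [7.5, 5.0, 19.0]
desired_offset = [-3.0, 0.0, 0.0]   # hold 3 m behind, at the same altitude

vx, vy, vz = follower_velocity(leader, follower, desired_offset)
print(f"velocity command: ({vx:.2f}, {vy:.2f}, {vz:.2f}) m/s")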

Memorial

remembering Dr David Thouless

"Physics is, hopefully, simple. Physicists are not." -Edward Teller

In the early '70s, David James Thouless and John Michael Kosterlitz showed that thin layers of material could undergo fundamental changes known as phase transitions. Their work drew on a special branch of mathematics called topology.

Dr David J. Thouless shared the 2016 Nobel Prize in Physics with John M. Kosterlitz for this work from the '70s. He died on April 6, 2019, in Cambridge, England, at 84 years of age. He was born on September 21, 1934, in Bearsden, Scotland, earned a Bachelor of Arts degree in Natural Sciences from the University of Cambridge as an undergraduate at Trinity Hall, and received a PhD from Cornell University in 1958. He taught mathematical physics as a professor at the University of Birmingham from 1965 to 1978 and worked as a physicist at the University of Washington from 1980 until his retirement in 2003. His research explained strange states of matter, specifically the behaviour of matter when it changes state. Although scientists had believed it impossible for phase transitions to occur in very thin layers of material spanning only two dimensions, Dr Thouless, in collaboration with Kosterlitz, took on this belief held by the scientific community. They theorised mathematically that these systems are composed of clockwise- and counterclockwise-spinning vortices, and that at low temperatures the vortices align to permit the phenomena of superconductivity or superfluidity. They proved that materials can exhibit phase transitions in two-dimensional environments and superconductivity at low temperatures, and it was largely for this contribution that they shared the Nobel Prize. Although the work was purely theoretical, the Nobel committee announced that the work of Dr Thouless and his co-laureates is truly transformational, with long-term consequences both practical and fundamental.

Dr Thouless was elected a Fellow of the Royal Society in 1979 and a Fellow of the American Physical Society in 1987. He was awarded the prestigious Wolf Prize in Physics and the Dirac Medal in 1993, and in 1995 he was elected to the National Academy of Sciences. He showed an early interest in numbers and arithmetic, and his childhood pursuits included playing with them. His wife, Margaret Thouless, a professor emeritus at the University of Washington, said, "He was always reading something. Growing up, our house was filled with books." A deep thinker and theorist, he was a scholar of the highest calibre. His research has real and potential applications in a wide range of electronic devices, including quantum computers, which might use fundamental properties such as spin to perform incredibly fast calculations. He will be remembered for his outstanding contribution to explaining the behaviour of the materials around us.

-The Editorial Board
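For the curious reader, the topological objects at the heart of the Kosterlitz-Thouless picture can be illustrated in a few lines of Python. This toy example (not drawn from their papers) builds the simplest vortex spin field and counts how many full turns the spins make around a closed loop, the "winding number" that topology assigns to it:

import math

def winding_number(center, radius=1.0, samples=100):
    """Count full turns of the spin angle along a loop around `center`.
    In the vortex field below, the spin angle equals the polar angle."""
    total, prev = 0.0, None
    for i in range(samples + 1):
        t = 2 * math.pi * i / samples
        x = center[0] + radius * math.cos(t)
        y = center[1] + radius * math.sin(t)
        spin = math.atan2(y, x)  # vortex centred at the origin
        if prev is not None:
            d = spin - prev
            d -= 2 * math.pi * round(d / (2 * math.pi))  # wrap to (-pi, pi]
            total += d
        prev = spin
    return round(total / (2 * math.pi))

print(winding_number((0.0, 0.0)))  # loop encloses the vortex core: 1
print(winding_number((5.0, 0.0)))  # loop away from the core: 0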



Alumnus of the Year novellus

Ms Neha Narkhede

Every person has a journey and every journey has a start. This section is dedicated to acknowledging the achievements of distinguished professionals who started their journey at PICT and scaled new heights of success.

A trailblazer in the field of streaming-data technology, Ms Neha Narkhede graduated from PICT in 2006 with a degree in Computer Engineering and went on to complete her Master's in Computer Science at the Georgia Institute of Technology. She spent several successful years working at companies such as Oracle and LinkedIn before founding her own company, Confluent. At LinkedIn, Ms Narkhede was responsible for leading its Streams Infrastructure, which encompassed building and scaling LinkedIn's data pipeline. As part of this role, she helped develop Apache Kafka, a platform that can process and organise a huge influx of data coming from sites in real time. Seeing the platform's great potential, she and her co-workers at LinkedIn founded Confluent, a start-up that builds Apache Kafka tools for companies, in 2014. Since then, Confluent has been helping several major companies such as Netflix, Goldman Sachs, Microsoft and Uber understand their data better. Confluent, with Ms Narkhede as its Chief Technology Officer (CTO), has aided Goldman Sachs in delivering information to traders in real time, Netflix in collecting data for its video recommendations, and Uber in analysing data for its surge-pricing system. Confluent is currently valued at 2.5 billion dollars and continues to gain funding from prominent venture-capital firms such as Sequoia Capital.

Apache Kafka is a community-distributed streaming platform capable of handling trillions of events a day; initially conceived as a messaging queue, it is used today by nearly 60% of Fortune 500 companies as a fundamental technology platform for event streaming. Ms Narkhede has co-authored a book on Apache Kafka, "Kafka: The Definitive Guide", with Todd Palino and Gwen Shapira. Her impressive work in the field of streaming-data technology has won her several accolades, including "35 Innovators Under 35" by MIT Technology Review and "The World's Top 50 Women in Tech 2018" by Forbes; most recently, she ranked 60th on Forbes' list of "America's Richest Self-Made Women 2019". Ms Narkhede's commendable work has caused ripples in the domain of data streaming and drastically improved the way data is perceived. The event-streaming paradigm she proposes urges us to rethink data not as stored records or transient messages but as a continually updating stream of information. She believes that just as the cloud is the future of data centres, event streaming is the future of data. We are truly inspired by Ms Narkhede's illustrious career and wish her the best for her journey ahead.

-The Editorial Board
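For a flavour of the event-streaming paradigm described above, here is a minimal sketch using the open-source kafka-python client (the broker address, topic name and payload are invented for illustration, and this is plain Apache Kafka rather than Confluent's own tooling):

from kafka import KafkaProducer, KafkaConsumer

# Each event is an immutable record appended to a named stream ("topic"),
# assuming a Kafka broker is running at localhost:9092.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("page-views", key=b"user-42", value=b'{"url": "/pricing"}')
producer.flush()

# Any number of independent consumers can replay the same stream.
consumer = KafkaConsumer(
    "page-views",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",  # read the stream from the beginning
)
for record in consumer:
    print(record.key, record.value)
    break  # stop after the first event in this demo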

PISB Office Bearers 2018-2019

Chairperson: Mihir Baheti
Vice Chairperson: Hritik Zutshi
Treasurer: Hemang Pandit
Vice Treasurer: Manav Peshwani
Secretary: Aman Goenka, Amey Deshpande, Mahima Hotchandani
Joint Secretary: Bhushan Chougule, Shreya Lanjewar, Virendra Chandel
Secretary of Finance: Shubham Runwal
Joint Secretary of Finance: Saurabh Shastri
VNL Head: Abhishek Vishwakarma, Agnijeet Chaudhary
VNL Team: Virendra Chandel
PRO Head: Anmol Kumar
PRO Team: Sreya Patranabish
Design Head: Ashutosh Danwe, Deepak Choudhary, Isha Chidrawar, Parth Shah, Siddhi Inamdar
Design Team: Bhushan Chougule, Nishita Pali, Rashmi Venkateshwaran, Saurabh Shastri, Shruti Phadke
P.I.N.G. Head: Harshavardhan Aagale, Shubham Sundrani, Siddhi Inamdar, Vedang Mandhana
P.I.N.G. Team: Rashmi Venkateshwaran, Sachin Johnson, Shruti Phadke, Sidhee Hande
Webmaster: Dhairyasheel Sutar, Evleen Singh Thakral, Yakshit Jain
Web Team: Bhushan Pagare, Shreyas Godbole
App Head: Amey Deshpande, Aniket Patil, Parth Shah
App Team: Ritesh Badaan, Siddharth Patil
Programming Head: Ayush Gupta, Neeraj Panse, Omkar Patil, Shubham Sundrani, Soham Deshmukh, Vedang Mandhana
Programming Team: Ajay Kadam, Kapil Mirchandani, Kunal Chaddha, Kushal Chordiya, Saumitra Kulkarni, Tanmay Nale
WIE Chair: Aditi Kulkarni, Siddhi Inamdar
WIE Secretary: Pallavi Dadape, Rashmi Venkateshwaran, Shreya Lanjewar




PISB Office Bearers 2018-2019

Senior Council: Akash Patil, Chaitanya Rahalkar, Dhaval Gujar, Gaurav Kale, Girish Haral, Kush Teppalwar, Lokesh Agrawal, Mansi Vyas, Monesh Bansal, Mrugakshi Chidrawar, Muskan Agrawal, Nishit Chaudhari, Omkar Bharamgunde, Omkar Patil, Piyush Patil, Prachi Kanakdande, Prajwal Chandak, Pranav Budhwant, Revati Rajarshi, Riya Wakharkar, Rushil Palwe, Sagar Barapatre, Shivam Gor, Shreyas Garsund, Siddhesh Tundulwar, Tejas Agrawal, Toshal Agrawal

Junior Council: Aaryan Kaul, Amol Gandhi, Devashish Dewalkar, Krushna Nayse, Isha Pardikar, Mihir Bhansali, Muskan Jain, Neelanjney Pilarisetty, Omkar Deshpande, Onkar Bendre, Piyusha Gumte, Prathamesh Musale, Purvesh Jain, Rajavi Kakade, Rohit Nagotkar, Rucha Shinde, Sanya Gulati, Siddhi Honrao, Shivang Raina, Shraddha Laghate, Shreepad Dode, Shubham Kirve, Sudhanshu Bhoi, Vansh Kaul, Varun Gattani, Yash Biyani



